Malwarebytes

One click on this fake Google Meet update can give attackers control of your PC

Malwarebytes Security - Fri, 03/06/2026 - 2:35pm

A phishing page disguised as a Google Meet update notice is silently handing victims’ Windows computers to an attacker-controlled management server. No password is stolen, no files are downloaded, and there are no obvious red flags.

It just takes a single click on a convincing Google Meet fake update prompt to enroll your Windows PC into an attacker-controlled device management system.

“To keep using Meet, install the latest version”

The social engineering is almost embarrassingly simple: an app update notice in the right brand colors.

The page impersonates Google Meet well enough to pass a casual glance. But neither the Update now button nor the Learn more link below it goes anywhere near Google.

Both trigger a Windows deep link using the ms-device-enrollment: URI scheme. That’s a handler built into Windows so IT administrators can send staff a one-click device enrollment link. The attacker has simply pointed it at their own server instead.

What “enrollment” actually means for your machine

The moment a visitor clicks, Windows bypasses the browser and opens its native Set up a work or school account dialog. That’s the same prompt that appears when a corporate IT team provisions a new laptop.

The URI arrives pre-populated: The username field reads collinsmckleen@sunlife-finance.com (a domain impersonating Sun Life Financial), and the server field already points to the attacker’s endpoint at tnrmuv-api.esper[.]cloud.
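For context, Microsoft documents this link format for IT-driven enrollment. A link of roughly the following shape (reconstructed for illustration from the details above, not copied from the live page; `mode`, `username`, and `servername` are the parameters Microsoft documents for the scheme) would produce the pre-filled dialog:

```
ms-device-enrollment:?mode=mdm&username=collinsmckleen@sunlife-finance.com&servername=https://tnrmuv-api.esper[.]cloud/...
```

Because the scheme hands off directly to the native enrollment dialog, the page needs no download, no script, and no spoofed login form at all.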

The attacker isn’t trying to perfectly impersonate the victim’s identity. The goal is simply to get the user to click through a trusted Windows enrollment workflow, which grants device control regardless of whose name appears in the form. Campaigns like this rarely expect everyone to fall for them. Even if most people stop, a small percentage continuing is enough for the attack to succeed.

A victim who clicks Next and proceeds through the wizard will hand their machine to an MDM (mobile device management) server they have never heard of.

MDM (Mobile Device Management) is the technology companies use to remotely administer employee devices. Once a machine is enrolled, the MDM administrator can silently install or remove software, enforce or change system settings, read the file system, lock the screen, and wipe the device entirely, all without the user’s knowledge.

There is no ongoing malware process to detect, because the operating system itself is doing the work on the attacker’s behalf.

The attacker’s server is hosted on Esper, a legitimate commercial MDM platform used by real enterprises.

Decoding the Base64 string embedded in the server URL reveals two pre-configured Esper objects: a blueprint ID (7efe89a9-cfd8-42c6-a4dc-a63b5d20f813) and a group ID (4c0bb405-62d7-47ce-9426-3c5042c62500). These represent the management profile that will be applied to any enrolled device.
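The decoding step itself is trivial. A minimal Python sketch of the round trip (the JSON field names here are assumptions for illustration; only the two IDs come from the observed sample):

```python
import base64
import json

# Hypothetical structure: the enrollment URL carries a Base64 blob that
# resolves to two Esper object IDs (the field names are illustrative).
payload = {
    "blueprint_id": "7efe89a9-cfd8-42c6-a4dc-a63b5d20f813",
    "group_id": "4c0bb405-62d7-47ce-9426-3c5042c62500",
}

# Round-trip: encode the way a campaign builder might, then decode the
# way an analyst would after pulling the blob out of the URL.
blob = base64.b64encode(json.dumps(payload).encode()).decode()
decoded = json.loads(base64.b64decode(blob))
print(decoded["blueprint_id"])  # -> 7efe89a9-cfd8-42c6-a4dc-a63b5d20f813
```

Base64 is encoding, not encryption: anyone who extracts the parameter from the URL can read the management profile it points to.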

The ms-device-enrollment: handler works exactly as Microsoft designed it, and Esper works exactly as Esper designed it. The attacker has simply pointed both at someone who never consented.

No malware, no credential theft. That’s the problem.

There is no malicious executable here, and no phished Microsoft login.

The ms-device-enrollment: handler is a documented, legitimate Windows feature that the attacker has simply redirected.

Because the enrollment dialog is a real Windows system prompt rather than a spoofed web page, it bypasses browser security warnings and email scanners looking for credential-harvesting pages.

The command infrastructure runs on a reputable SaaS platform, so domain-reputation blocking is unlikely to help.

Most conventional security tools have no category for “legitimate OS feature pointed at hostile infrastructure.”

The broader trend here is one the security industry has been watching with growing concern: attackers abandoning malware payloads in favor of abusing legitimate operating system features and cloud platforms.

What to do if you think you’ve been affected

Because the attack relies on legitimate system features rather than malware, the most important step is checking whether your device was enrolled.

  • Check whether your device was enrolled:
    • Open Settings > Accounts > Access work or school.
    • If you see an entry you don’t recognize, especially one referencing sunlife-finance[.]com or esper[.]cloud, click it and select Disconnect.
  • If you clicked “Update now” on updatemeetmicro[.]online and completed the enrollment wizard, treat your device as potentially compromised.
  • Run an up-to-date, real-time anti-malware solution to check for any secondary payloads the MDM server may have pushed after enrollment.
  • If you are an IT administrator, consider whether your organization needs a policy blocking unapproved MDM enrollment. Microsoft Intune and similar tools can restrict which MDM servers Windows devices are allowed to join.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malwarebytes

Beware of fake OpenClaw installers, even if Bing points you to GitHub

Malwarebytes Security - Fri, 03/06/2026 - 6:11am

Attackers are abusing OpenClaw’s popularity by seeding fake “installers” on GitHub, boosted by Bing AI search results, to deliver infostealers and proxy malware instead of the AI assistant users were looking for.

OpenClaw is an open‑source, self‑hosted AI agent that runs locally on your machine with broad permissions: it can read and write files, run shell commands, interact with chat apps, email, calendars, and cloud services. In other words, if you wire it into your digital life, it may end up handling access to a lot of sensitive data.

And, as is often the case, popularity brings brand impersonation. According to researchers at Huntress, attackers created malicious GitHub repositories posing as OpenClaw Windows installers, including a repo called openclaw-installer. These were added on February 2 and stayed up until roughly February 10, when they were reported and removed.

Bing search results pointed victims to these GitHub repositories. But when the victim downloaded and ran the fake installer, it didn’t give them OpenClaw at all. The installer dropped Vidar, a well‑known information stealer, directly into memory. In some cases, the loader also deployed GhostSocks, effectively turning the victim’s system into a residential proxy node criminals could route their traffic through to hide their activities.

How to stay safe

The good news is that the campaign appears to have been short-lived, and there are clear indicators and mitigations you can use.

If you downloaded an OpenClaw installer recently from GitHub after searching “OpenClaw Windows” in Bing, especially in early February, you should assume your system is compromised until proven otherwise.

Vidar can steal browser credentials, crypto wallets, and data from applications like Telegram. GhostSocks silently turns your machine into a proxy node for other people’s traffic. That’s not just a privacy issue. It can drag you into abuse investigations when someone else’s attacks appear to come from your IP address.

If you suspect you ran a fake installer:

  • Disconnect the machine from your network, then run a full system scan with a reputable, up‑to‑date anti‑malware solution.
  • Change passwords for critical services (email, banking, cloud, developer accounts) and do that on a different, clean device.
  • Review recent logins and sessions for unusual activity, and enable multi‑factor authentication (MFA) where you haven’t already.

If you’re still intent on using OpenClaw:

  • Run OpenClaw (or similar agents) in a sandboxed VM or container on isolated hosts, with default‑deny egress and tightly scoped allow‑lists.
  • Give the runtime its own non‑human service identities, least privilege, short token lifetimes, and no direct access to production secrets or sensitive data.
  • Treat skill/extension installation as introducing new code into a privileged environment: restrict registries, validate provenance, and monitor for rare or newly seen skills.
  • Log and periodically review agent memory/state and behavior for durable instruction changes, especially after ingesting untrusted content or shared feeds.
  • Plan for the possibility that you will need to nuke and pave: keep non‑sensitive state snapshots handy, document a rebuild and credential‑rotation playbook, and rehearse it.
  • Run an up-to-date, real-time anti-malware solution that can detect information stealers and other malware.


Fake CleanMyMac site installs SHub Stealer and backdoors crypto wallets

Malwarebytes Security - Fri, 03/06/2026 - 3:44am

A convincing fake version of the popular Mac utility CleanMyMac is tricking users into installing malware.

The site instructs visitors to paste a command into Terminal. If they do, it installs SHub Stealer, macOS malware designed to steal sensitive data including saved passwords, browser data, Apple Keychain contents, cryptocurrency wallets, and Telegram sessions. It can even modify wallet apps such as Exodus, Atomic Wallet, Ledger Wallet, and Ledger Live so attackers can later steal the wallet’s recovery phrase.

The site impersonates the CleanMyMac website but has no connection to the legitimate software or its developer, MacPaw.

Remember: Legitimate apps almost never require you to paste commands into Terminal to install them. If a website tells you to do this, treat it as a major red flag and do not proceed. When in doubt, download software only from the developer’s official website or the App Store.

Read the deep-dive to see what we discovered.

“Open Terminal and paste the following command”

The attack begins at cleanmymacos[.]org, a website designed to look like the real CleanMyMac product page. Visitors are shown what appears to be an advanced installation option of the kind a power user might expect. The page instructs them to open Terminal, paste a command, and press Return. There’s no download prompt, disk image, or security dialog.

That command performs three actions in quick succession:

  • First, it prints a reassuring line: macOS-CleanMyMac-App: https://macpaw.com/cleanmymac/us/app to make the Terminal output look legitimate.
  • Next, it decodes a base64-encoded link that hides the real destination.
  • Finally, it downloads a shell script from the attacker’s server and pipes it directly into zsh for immediate execution.

From the user’s perspective, nothing unusual happens.

This technique, known as ClickFix, has become a common delivery method for Mac infostealers. Instead of exploiting a vulnerability, it tricks the user into running the malware themselves. Because the command is executed voluntarily, defenses such as Gatekeeper, notarization checks, and XProtect offer little protection once the user pastes the command and presses Return.

Geofencing: Not everyone gets the payload

The first script that arrives on the victim’s Mac is a loader, which is a small program that checks the system before continuing the attack.

One of its first checks looks at the macOS keyboard settings to see whether a Russian-language keyboard is installed. If it finds one, the malware sends a cis_blocked event to the attacker’s server and exits without doing anything else.

This is a form of geofencing. Malware linked to Russian-speaking cybercriminal groups often avoids infecting machines that appear to belong to users in CIS countries (the Commonwealth of Independent States, which includes Russia and several neighboring nations). By avoiding systems that appear to belong to Russian users, the attackers reduce the risk of attracting attention from local law enforcement.

The behavior does not prove where SHub was developed, but it follows a pattern long observed in that ecosystem, where malware is configured not to infect systems in the operators’ own region.

If the system passes this check, the loader sends a profile of the machine to the command-and-control server at res2erch-sl0ut[.]com. The report includes the device’s external IP address, hostname, macOS version, and keyboard locale.

Each report is tagged with a unique build hash, a 32-character identifier that acts as a tracking ID. The same identifier appears in later communications with the server, allowing the operators to link activity to a specific victim or campaign.
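The article doesn't specify how that identifier is generated, but a 32-character string is consistent with a hex-encoded MD5 digest, a common choice for tracking IDs. A hypothetical sketch (the input string is invented; this is one plausible way such an ID could be produced, not a confirmed detail of SHub's builder):

```python
import hashlib

# Assumption for illustration only: hashing a build/campaign string with
# MD5 yields exactly the kind of 32-character hex identifier described.
build_hash = hashlib.md5(b"example-campaign-string").hexdigest()
print(build_hash)       # 32 hexadecimal characters
print(len(build_hash))  # -> 32
```

Whatever the derivation, the operational point is the same: a stable identifier lets the server correlate every later beacon with the original infection.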

“System Preferences needs your password to continue”

Comparing payloads served with and without a build hash reveals another campaign-level field in the malware builder: BUILD_NAME. In the sample tied to a build hash, the value is set to PAds; in the version without a hash, the field is empty. The value is embedded in the malware’s heartbeat script and sent to the command-and-control (C2) server during every beacon check-in alongside the bot ID and build ID.

What PAds stands for cannot be confirmed from the payload alone, but its structure matches the kind of traffic-source tag commonly used in pay-per-install or advertising campaigns to track where infections originate. If that interpretation is correct, it suggests victims may be reaching the fake CleanMyMac site through paid placements rather than organic search or direct links.

Once the loader confirms a viable target, it downloads and executes the main payload: an AppleScript hosted at res2erch-sl0ut[.]com/debug/payload.applescript. AppleScript is Apple’s built-in automation language, which allows the malware to interact with macOS using legitimate system features. Its first action is to close the Terminal window that launched it, removing the most obvious sign that anything happened.

Next comes the password harvest. The script displays a dialog box that closely mimics a legitimate macOS system prompt. The title reads “System Preferences”, the window shows Apple’s padlock icon, and the message says:

“Required Application Helper. Please enter password for continue.”

The awkward wording (“for continue” instead of “to continue”) is one clue the prompt is fake, though many users under pressure might not notice it.

If the user enters their password, the malware immediately checks whether it is correct using the macOS command-line tool dscl. If the password is wrong, it is logged and the prompt appears again. The script will repeat the prompt up to ten times until a valid password is entered or the attempts run out.

That password is valuable because it unlocks the macOS Keychain, Apple’s encrypted storage system for saved passwords, Wi-Fi credentials, app tokens, and private keys. Without the login password, the Keychain database is just encrypted data. With it, the contents can be decrypted and read.

A systematic sweep of everything worth stealing

With the password in hand, SHub begins a systematic sweep of the machine. All collected data is staged in a randomly named temporary folder—something like /tmp/shub_4823917/—before being packaged and sent to the attackers.

The browser targeting is extensive. SHub searches 14 Chromium-based browsers (Chrome, Brave, Edge, Opera, OperaGX, Vivaldi, Arc, Sidekick, Orion, Coccoc, Chrome Canary, Chrome Dev, Chrome Beta, and Chromium), stealing saved passwords, cookies, and autofill data from every profile it finds. Firefox receives the same treatment for stored credentials.

The malware also scans installed browser extensions, looking for 102 known cryptocurrency wallet extensions by their internal identifiers. These include MetaMask, Phantom, Coinbase Wallet, Exodus Web3, Trust Wallet, Keplr, and many others.

Desktop wallet applications are also targeted. SHub collects local storage data from 23 wallet apps, including Exodus, Electrum, Atomic Wallet, Guarda, Coinomi, Sparrow, Wasabi, Bitcoin Core, Monero, Litecoin Core, Dogecoin Core, BlueWallet, Ledger Live, Ledger Wallet, Trezor Suite, Binance, and TON Keeper. Each wallet folder is capped at 100 MB to keep the archive manageable.

Beyond wallets and browsers, SHub also captures the macOS Keychain directory, iCloud account data, Safari cookies and browsing data, Apple Notes databases, and Telegram session files—information that could allow attackers to hijack accounts without knowing the passwords.

It also copies shell history files (.zsh_history and .bash_history) and .gitconfig, which often contain API keys or authentication tokens used by developers.

All of this data is compressed into a ZIP archive and uploaded to res2erch-sl0ut[.]com/gate along with a hardcoded API key identifying the malware build. The archive and temporary files are then deleted, leaving minimal traces on the system.

The part that keeps stealing after you’ve cleaned up

Most infostealers are smash-and-grab operations: they run once, take everything, and leave. SHub does that, but it also goes a step further.

If it finds certain wallet applications installed, it downloads a replacement for the application’s core logic file from the attacker’s server and swaps it in silently. We retrieved and analyzed five such replacements. All five were backdoored, each tailored to the architecture of the target application.

The targets are Electron-based apps. These are desktop applications built on web technologies whose core logic lives in a file called app.asar. SHub kills the running application, downloads a replacement app.asar from the C2 server, overwrites the original inside the application bundle, strips the code signature, and re-signs the app so macOS will accept it. The process runs silently in the background.

The five confirmed crypto wallet apps are Exodus, Atomic Wallet, Ledger Wallet, Ledger Live, and Trezor Suite.

Exodus: silent credential theft on every unlock

On every wallet unlock, the modified app silently sends the user’s password and seed phrase to wallets-gate[.]io/api/injection. A one-line bypass is added to the network filter to allow the request through Exodus’s own domain allowlist.

Atomic Wallet: the same exfiltration, no bypass required

On every unlock, the modified app sends the user’s password and mnemonic to wallets-gate[.]io/api/injection. No network filter bypass is required—Atomic Wallet’s Content Security Policy already allows outbound HTTPS connections to any domain.

Ledger Wallet: TLS bypass and a fake recovery wizard

The modified app disables TLS certificate validation at startup. Five seconds after launch, it replaces the interface with a fake three-page recovery wizard that asks the user for their seed phrase and sends it to wallets-gate[.]io/api/injection.

Ledger Live: identical modifications

Ledger Live receives the same modifications as Ledger Wallet: TLS validation is disabled and the user is presented with the same fake recovery wizard.

Trezor Suite: fake security update overlay

After the application loads, a full-screen overlay styled to match Trezor Suite’s interface appears, presenting a fake critical security update that asks for the user’s seed phrase. The phrase is validated using the app’s own bundled BIP39 library before being sent to wallets-gate[.]io/api/injection.

At the same time, the app’s update mechanism is disabled through Redux store interception so the modified version remains in place.

Five wallets, one endpoint, one operator

Across all five modified applications, the exfiltration infrastructure is identical: the same wallets-gate[.]io/api/injection endpoint, the same API key, and the same build ID.

Each request includes a field identifying the source wallet—exodus, atomic, ledger, ledger_live, or trezor_suite—allowing the backend to route incoming credentials by product.

This consistency across five independently modified applications strongly suggests that a single operator built all of the backdoors against the same backend infrastructure.

A persistent backdoor disguised as Google’s own update service

To maintain long-term access, SHub installs a LaunchAgent, which is a background task that macOS automatically runs every time the user logs in. The file is placed at:

~/Library/LaunchAgents/com.google.keystone.agent.plist

The location and name are chosen to mimic Google’s legitimate Keystone updater. The task runs every sixty seconds.

Each time it runs, it launches a hidden bash script located at:

~/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate

The script collects a unique hardware identifier from the Mac (the IOPlatformUUID) and sends it to the attacker’s server as a bot ID. The server can respond with base64-encoded commands, which the script decodes, executes, and then deletes.

In practice, this gives the attackers the ability to run commands on the infected Mac at any time until the persistence mechanism is discovered and removed.
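Reconstructed for illustration, a LaunchAgent with the properties described above would look roughly like this (not a captured sample; launchd requires absolute paths, so a placeholder home directory stands in for the victim’s):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label chosen to blend in with Google's real Keystone updater -->
    <key>Label</key>
    <string>com.google.keystone.agent</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Users/victim/Library/Application Support/Google/GoogleUpdate.app/Contents/MacOS/GoogleUpdate</string>
    </array>
    <!-- Fire every sixty seconds -->
    <key>StartInterval</key>
    <integer>60</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Nothing in this file is exotic; `StartInterval` and `RunAtLoad` are ordinary launchd keys, which is why the persistence mechanism attracts so little attention.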

The final step is a decoy error message shown to the user:

“Your Mac does not support this application. Try reinstalling or downloading the version for your system.”

This explains why CleanMyMac appeared not to install and sends the victim off to troubleshoot a problem that doesn’t actually exist.

SHub’s place in a growing family of Mac stealers

SHub is not an isolated creation. It belongs to a rapidly evolving family of AppleScript-based macOS infostealers that includes MacSync Stealer (an expanded version of the malware known as Mac.c, first seen in April 2025) and Odyssey Stealer, and it shares traits with other credential stealers such as Atomic Stealer.

These families share a similar architecture: a ClickFix delivery chain, an AppleScript payload, a fake System Preferences password prompt, recursive data harvesting functions, and exfiltration through a ZIP archive uploaded to a command-and-control server.

What distinguishes SHub is the sophistication of its infrastructure. Features such as per-victim build hashes for campaign tracking, detailed wallet targeting, wallet application backdooring, and a heartbeat system capable of running remote commands all suggest an author who studied earlier variants and invested heavily in expanding them. The result resembles a malware-as-a-service platform rather than a simple infostealer.

The presence of a DEBUG tag in the malware’s internal identifier, along with the detailed telemetry it sends during execution, suggests the builder was still under active development at the time of analysis.

The campaign also fits a broader pattern of brand impersonation attacks. Researchers have documented similar ClickFix campaigns impersonating GitHub repositories, Google Meet, messaging platforms, and other software tools, with each designed to convince users that they are following legitimate installation instructions. The cleanmymacos.org site appears to follow the same playbook, using a well-known Mac utility as the lure.

What to do if you may have been affected

The most effective part of this attack is also its simplest: it convinces the victim to run the malicious command themselves.

By presenting a Terminal command as a legitimate installation step, the campaign sidesteps many of macOS’s built-in protections. No app download is required, no disk image is opened, and no obvious security warning appears. The user simply pastes the command and presses Return.

This reflects a broader trend: macOS is becoming a more attractive target, and the tools attackers use are becoming more capable and more professional. SHub Stealer, even in its current state, represents a step beyond many earlier macOS infostealers.

For most users, the safest rule is also the simplest: install software only from the App Store or from a developer’s official website. The App Store handles installation automatically, so there is no Terminal command, no guesswork, and no moment where you have to decide whether to trust a random website.

  • Do not run the command. If you have not yet executed the Terminal command shown on cleanmymacos[.]org or a similar site, close the page and do not return.
  • Check for the persistence agent. Open Finder, press Cmd + Shift + G, and navigate to ~/Library/LaunchAgents/.
    If you see a file named com.google.keystone.agent.plist that you did not install, delete it. Also check: ~/Library/Application Support/Google/. If a folder named GoogleUpdate.app is present and you did not install it, remove it.
  • Treat your wallet seed phrase as compromised. If you have Exodus, Atomic Wallet, Ledger Live, Ledger Wallet, or Trezor Suite installed and you ran this command, assume your seed phrase and wallet password have been exposed. Move your funds to a new wallet created on a clean device immediately. Seed phrases cannot be changed, and anyone with a copy can access the wallet.
  • Change your passwords. Your macOS login password and any passwords stored in your browser or Keychain should be considered exposed. Change them from a device you trust.
  • Revoke sensitive tokens. If your shell history contained API keys, SSH keys, or developer tokens, revoke and regenerate them.
  • Run Malwarebytes for Mac. It can detect and remove remaining components of the infection, including the LaunchAgent and modified files.
Indicators of compromise (IOCs)

Domains
  • cleanmymacos[.]org — phishing site impersonating CleanMyMac
  • res2erch-sl0ut[.]com — primary command-and-control server (loader delivery, telemetry, data exfiltration)
  • wallets-gate[.]io — secondary C2 used by wallet backdoors to exfiltrate seed phrases and passwords


Windows File Shredder: When deleting a file isn’t enough

Malwarebytes Security - Thu, 03/05/2026 - 6:07am

Most of us think deleting a file means it’s gone for good. But “delete” on a Windows device often just means “out of sight,” not necessarily “out of reach.”

That’s where File Shredder, a new feature within Malwarebytes Tools for Windows, comes in. File Shredder lets you securely delete files from your hard drive or USB drive, so the files are not just removed—but completely unrecoverable, even with specialized recovery software.

What File Shredder does differently

When you delete a file by placing it in your Recycle Bin and emptying the contents, your computer typically removes only the reference to it; the data itself can remain on the drive until it’s overwritten. That leftover data can often be recovered with basic data-recovery tools, some of which are free to download. These traces are a problem if the file includes personal, financial, or other sensitive information, like tax documents, scanned IDs, contracts, or anything else you would like to remain private forever.

File Shredder goes beyond standard deletion by instead permanently overwriting the file data, ensuring it can’t be reconstructed or recovered. Once a file is shredded, it’s gone for good—no undo, no recovery, no second chances.

That makes File Shredder especially useful when:

  • You’re cleaning up sensitive files before selling or donating a device
  • You need to securely remove files from a USB drive
  • You’re minimizing digital clutter without leaving data behind
  • You want peace of mind that private files stay private
How to use File Shredder

File Shredder is designed to be powerful without being complicated.

To use File Shredder:

  • Open the Malwarebytes app and select the “Tools” icon from the left-hand menu (the screwdriver and wrench icon)
  • From this menu, find and click on “File Shredder”
  • Once here, you can manually add files or folders to the list and then click on the button “Delete permanently”
  • You will be asked to confirm your request before File Shredder deletes the files

After your files are deleted by File Shredder you can move on, confident that the data can’t be accessed again.

Protection means your data is in your control

Cybersecurity isn’t just about blocking threats—it’s also about giving you control over your own data. File Shredder provides a way to do exactly that, helping you close the door on files that you no longer want on your devices.

Because when you’re done with a file, it should really be done.


Supreme Court to decide whether geofence warrants are constitutional

Malwarebytes Security - Thu, 03/05/2026 - 5:54am

Google has weighed in on a court case that will decide the future of a powerful but contentious tool for law enforcement. The company submitted an opinion to the US Supreme Court arguing that geofence warrants are unconstitutional.

A geofence warrant is a form of “reverse warrant” that turns a regular warrant on its head. Police get a regular warrant when they want to target a particular person. With a reverse warrant, police don’t know exactly who they’re looking for. Instead, they ask someone (typically a technology company) for a broad data set about a group of unknown people based on some common behavior. Then they analyze that data set for potential suspects.

With a geofence warrant, that data set is defined by a location and a time window. Law enforcement officials obtain a list of phones that were in that area during that period. Every device that was inside the circle comes back in the results, even if nobody on that list has been suspected of anything. Proximity is the only criterion.
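Conceptually, the search that produces such a list is just a spatial and temporal filter over a location database. A toy illustration (the schema, device IDs, and coordinates are invented; real responses involve anonymized identifiers and location-confidence radii):

```python
import math
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    timestamp: int  # Unix seconds

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Approximate great-circle distance in meters (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6_371_000 * math.asin(math.sqrt(a))

def geofence(pings, lat, lon, radius_m, t_start, t_end):
    """Return every device seen inside the circle during the window.

    There is no notion of suspicion here: proximity is the only filter.
    """
    return sorted({
        p.device_id
        for p in pings
        if t_start <= p.timestamp <= t_end
        and distance_m(p.lat, p.lon, lat, lon) <= radius_m
    })

pings = [
    Ping("a", 37.7749, -122.4194, 150),  # inside circle, inside window
    Ping("b", 37.7849, -122.4194, 150),  # ~1.1 km north: outside circle
    Ping("c", 37.7749, -122.4194, 500),  # inside circle, outside window
]
print(geofence(pings, 37.7749, -122.4194, 200, 100, 200))  # -> ['a']
```

Widening the radius or the time window sweeps in more devices automatically, which is exactly the overbreadth the briefs object to.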

That’s how Okello Chatrie was charged with armed bank robbery in Virginia in 2019: His phone showed up in a geofence warrant covering 17.5 acres (roughly thirteen football fields). He argued that this kind of search isn’t constitutional and shouldn’t have been used as evidence.

In 2024, the Fifth Circuit Court of Appeals agreed with that reasoning in a separate case, splitting with the Fourth Circuit’s ruling against Chatrie. The dispute has now reached the Supreme Court, with parties due to make oral arguments on April 27.

The case has seen a flurry of amicus curiae briefs, which are opinions from interested expert parties that have no direct involvement in the case. One of these is from Google, which on Monday urged the justices to find geofence warrants unconstitutional because of their broad scope. It has objected to more than 3,000 of them on constitutional grounds in recent months.

Google’s brief stated:

“Many of these overbroad warrants swept in hundreds, sometimes even thousands, of innocent people. State and federal courts have repeatedly granted Google’s motions to quash these overbroad warrants.”

How the database gets built

Although Google is just one of many organizations that filed amicus briefs, its position is especially notable because it has historically collected so much location data. Its Timeline feature (formerly Location History) logs device position via GPS, Wi-Fi networks, Bluetooth, and mobile signals, including when Google apps aren’t being used, according to its policy page.

At the time of the Chatrie warrant, it was recording position as frequently as every two minutes. All of that fed a centralized internal database that held 592 million individual accounts, so responding to any geofence request required Google to search essentially the entire store before producing a single name, according to an analysis by privacy advocacy group EPIC, which also regularly submits amicus briefs in privacy cases.

Google moved Timeline storage from its own servers onto users’ devices in July 2025, closing the door to fresh cloud-based requests against its own systems. But the constitutional question survives for historical data and for any company that has not followed suit.

The warrant that grew and grew

A geofence warrant does not stay fenced, according to a separate brief that the Center for Democracy and Technology (CDT) filed in the case last week. It said Google’s standard response to warrants had three steps. First it would deliver an anonymized list of devices inside the geofence. Then, police could ask for movement data on chosen “devices of interest,” which could track them outside the geographic boundary and beyond the original time window. Finally, without any further judicial approval, police could request subscriber-identifying information for whichever devices they chose to unmask.

In the Chatrie case, positioning data was imprecise enough that, as the district court found, the warrant may have included devices outside the intended area. According to the CDT brief:

“The Geofence Warrant could have captured the location of someone who was hundreds of feet outside the geofence.”

The CDT argues in its brief that this can expose the privacy of people going about their everyday lives, engaging in legal activities that they might not want others to know about. The warrant that scooped up Chatrie included a hotel and a restaurant.

Some of these requests are far broader: Google said it successfully challenged a warrant seeking the location history of anyone in large portions of San Francisco over two and a half days. Google complained in its brief:

“No court would authorize a physical search of hundreds of people or places, yet geofence warrants sometimes do so by design.”

What can you do to stop yourself getting swept up in a geofencing search?

If your phone stores detailed location history with Google, that data may be included in geofence warrant responses. Limiting what gets saved can reduce how much location information exists in the first place.

There are two Google settings that matter: Timeline (Location History) and Web & App Activity. Turning off one does not automatically disable the other.

Timeline stores a detailed record of where your device has been, although it’s off by default. Web & App Activity can also log location signals when you use Google services like Search, Maps, or other apps.

Google provides instructions on how to review and disable these settings in its support documentation.

Google has previously settled lawsuits accusing it of misleading users about how location data is stored across these settings, so reviewing both controls is important.

Reverse warrants may not stop at location data

The implications of the case extend well past maps, though. The CDT brief warns that if courts endorse the logic behind geofence warrants, then law enforcement may try to apply the same approach to other large datasets held by technology companies, such as AI chatbot data. That’s a step the DHS has already taken, issuing what has been reported as the first known warrant for ChatGPT user data.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Categories: Malware Bytes

Does the UK really want to ban VPNs? And can it be done?

Malware Bytes Security - Wed, 03/04/2026 - 8:44am

The idea of a “Great British Firewall” makes for a catchy headline, but it would be riddled with holes and cause huge problems.

The Guardian reports that GCHQ (Government Communications Headquarters), the UK’s intelligence, security, and cyber agency, is exploring the idea of a British firewall offering protection against malicious hackers. That falls within its remit, but one of the measures reportedly discussed (banning VPN software) raises practical and technical questions.

Here’s what you actually need to know, and why you shouldn’t panic about your VPN just yet.

  • There is nothing on the statute books, and no announced legislation, to ban VPNs for everyone. Ministers and regulators explicitly acknowledge VPNs as lawful services with legitimate uses.
  • The current political focus is on “online safety”, especially kids accessing porn and harmful content, and how VPNs can undermine the Online Safety Act’s age‑assurance and filtering regime.
  • The latest move is an online‑safety consultation that explicitly mentions “options to age-restrict or limit children’s VPN use where it undermines safety protections”, not an outright nationwide ban.

So what may happen is tighter controls around minors, and perhaps pressure on app stores and platforms, rather than a blanket prohibition for adults.

Options

Technically speaking, these are some of the measures available to address VPNs bypassing geo-blocking and local legislation.

  • App‑store and download pressure: Require Apple/Google to hide or age‑gate VPN apps for UK accounts, or block listing of some consumer VPNs. This raises friction for non‑technical users but is trivial to route around (sideloading where possible, non‑UK stores, manual configs).
  • Commercial provider lists: Buy accounts at popular VPNs, enumerate exit IP ranges, and require ISPs or certain sites (e.g. porn sites) to block those IPs. This can catch a large chunk of mainstream VPN traffic but is high‑maintenance and easy to evade with IP rotation, residential proxies, self‑hosted VPNs, and lesser‑known services.
  • Targeted site‑level blocking of VPNs: Require certain categories of sites (e.g. adult sites) to reject traffic that appears to come from VPN IPs, an idea already floated by some experts as more likely than an outright technology ban. That still leaves VPNs usable for everything else, including general browsing and work.
  • Age‑based device/network controls: Mandate school networks, child‑oriented devices, or parental control routers to block known VPN endpoints and app traffic, as media regulator Ofcom and others have suggested may be possible at the home‑router level. Again, this targets minors rather than adults and is only as strong as the weakest network they connect to (a friend’s Wi‑Fi, mobile hotspot, etc.).
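To see why the "commercial provider lists" approach is high-maintenance, consider a minimal sketch of IP-range blocking. The ranges below are reserved documentation addresses standing in for a provider's published exit ranges, not real VPN infrastructure:

```python
import ipaddress

# Hypothetical exit ranges for two commercial VPN providers
# (TEST-NET documentation addresses, used here as placeholders).
BLOCKED_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/25"),
]

def is_blocked(ip: str) -> bool:
    """True if the address falls inside any blocklisted VPN exit range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)
```

The moment a provider rotates to a fresh range, or a user self-hosts on an unlisted server, every lookup against the stale list comes back clean, which is exactly the cat-and-mouse dynamic described above.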

All of these are “making it harder” tactics rather than a hard technical kill switch.

Why a watertight VPN ban is essentially impossible

To comprehensively block VPNs, the government would need to require internet providers to inspect traffic, restrict apps from app stores, and attempt to cut off access to thousands of VPN servers worldwide. That would be a massive, expensive, and deeply complicated undertaking—and it still wouldn’t work.

Problem 1: VPNs are basically invisible

Modern VPNs are designed to look very similar to normal web browsing. When you load a website over HTTPS (the padlock in your browser) and when you connect to a VPN, the traffic flowing through your internet connection looks almost identical. Reliably telling them apart is a bit like trying to spot which cars on a motorway are taxis versus private vehicles based solely on their tire tread patterns at motorway speed, for every car, in real time. You’d end up accidentally blocking huge amounts of perfectly ordinary internet traffic in the attempt.

Problem 2: Too many legitimate users depend on VPNs

VPNs aren’t just for privacy-conscious consumers. They’re how millions of people securely connect to their workplace from home. The NHS (the UK’s National Health Service) uses them for remote access. Journalists use them to protect sources. Researchers use them to access academic resources. Any serious enforcement effort would have to grapple with the risk of collateral damage to businesses and public services.

Problem 3: The ban would be trivially easy to bypass

Even if the government successfully blocked every major commercial VPN app and service, technically skilled users could simply rent a cheap server anywhere in the world and set up their own private tunnel in under ten minutes. There are also tools designed to evade exactly this kind of blocking, disguising encrypted traffic as ordinary web activity.

We know this because Russia has been trying to block VPNs for years, with the full weight of state enforcement behind the effort. Yet VPN usage in Russia has surged, not declined: blocked services pop up under new names and addresses, and new tools emerge overnight. This track record suggests that long-term, comprehensive suppression is difficult, even with aggressive powers of enforcement.

What does this actually mean for UK citizens?

The government can probably make consumer VPN use slightly more inconvenient, removing apps from UK app stores, for instance, or creating legal grey areas for certain uses. But a genuine, technical ban on VPN software and encrypted connections is not realistically achievable without causing serious collateral damage to the UK’s digital economy and the millions of people who depend on this technology for entirely legitimate reasons.

Don’t ditch your VPN. The Great Firewall of Great Britain isn’t coming. And if it tried, it would have more holes than a fishing net.

Hat tip to Stefan Dasic and the Malwarebytes VPN team for their invaluable input.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Categories: Malware Bytes

Attackers abuse OAuth’s built-in redirects to launch phishing and malware attacks

Malware Bytes Security - Wed, 03/04/2026 - 7:53am

Attackers are abusing normal OAuth error redirects to send users from a legitimate Microsoft or Google login URL to phishing or malware pages, without ever completing a successful sign‑in or stealing tokens from the OAuth flow itself.

That calls for a bit more explanation.

OAuth (Open Authorization) is an open-standard protocol for delegated authorization. It allows users to grant websites or applications access to their data on another service (for example, Google or Facebook) without sharing their password. 

OAuth redirection is the process where an authorization server sends a user’s browser back to an application (client) with an authorization code or token after user authentication.

Researchers found that phishers use silent OAuth authentication flows and intentionally invalid scopes to redirect victims to attacker-controlled infrastructure without stealing tokens.

So, what does this attack look like from a target’s perspective?

From the user’s perspective, the attack chain looks roughly like this:

The email

An email arrives with a plausible business lure. For example, you receive an email about something routine but urgent: document sharing or review, a Social Security or financial notice, an HR or employee report, a Teams meeting invite, or a password reset.​

The email body contains a link such as “View document” or “Review report,” or a PDF attachment that includes a link instead.​

The link

You click the link because it appears to lead to a normal Microsoft or Google login. The visible URL (what you see when you hover over it) looks convincing, starting with a trusted domain like https://login.microsoftonline.com/ or https://accounts.google.com/.

There is no obvious sign that the parameters (prompt=none, odd or empty scope, encoded state) are abnormal.​

Silent OAuth

The crafted URL attempts a silent OAuth authorization (prompt=none) and uses parameters that are guaranteed to fail (for example, an invalid or missing scope).​

The identity provider evaluates your session and conditional access, determines the request cannot succeed silently, and returns an OAuth error, such as interaction_required, access_denied, or consent_required.​

The redirect

By design, the OAuth server then redirects your browser, including the error parameters and state, to the app’s registered redirect URI, which in these cases is the attacker’s domain.​

To the user, this is just a quick flash of a Microsoft or Google URL followed by another page. It’s unlikely anyone would notice the errors in the query string.

Landing page

The target gets redirected to a page that looks like a legitimate login or business site. This could very well be a clone of a trusted brand’s site.

From here, there are two possible malicious scenarios:

Phishing / Attacker in the Middle (AitM) variant

The victim lands on a normal-looking login page or a verification prompt, sometimes with CAPTCHAs or interstitials added to look more trustworthy and bypass some automated controls.​

The email address may already be filled in because the attackers passed it through the state parameter.

When the user enters credentials and completes multi-factor authentication (MFA), the attacker-in-the-middle toolkit intercepts everything, including session cookies, while relaying it to the real service so the experience feels legitimate.​

Malware delivery variant

Immediately (or after a brief intermediate page), the browser hits a download path and automatically downloads a file.​

The context of the page matches the lure (“Download the secure document,” “Meeting resources,” and so on), making it seem reasonable to open the file.​

The target might notice the initial file open or some system slowdown, but otherwise the compromise is practically invisible.​

Potential impact

By harvesting credentials or planting a backdoor, the attacker now has a foothold on the system. From there, they may carry out hands-on-keyboard activity, move laterally, steal data, or stage ransomware, depending on their goals.

The harvested credentials and tokens can be used to access email, cloud apps, or other resources without the need to keep malware on the device.​

How to stay safe

Since the attacker does not need your token from this flow (only the redirect into their own infrastructure), the OAuth request itself may look less suspicious. Be vigilant and follow our advice:

  • If you rely on hovering over links, be extra cautious when you see very long URLs with oauth2, authorize, and lots of encoded text, especially if they come from outside your organization.
  • Even if the start of the URL looks legitimate, verify with a trusted sender before clicking the link.
  • If something urgent arrives by email and immediately forces you through a strange login or starts a download you did not expect, assume it is malicious until proven otherwise.
  • If you are redirected somewhere unfamiliar, stop and close the tab.
  • Be very wary of files that download immediately after clicking a link in an email, especially from /download/ paths.
  • If a site says you must “run” or “enable” something to view a secure document, close it and double-check which site you’re currently on. It might be up to something.
  • Keep your OS, browser, and your favorite security tools up to date. They can block many known phishing kits and malware downloads automatically.
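The red flags in that list can also be checked mechanically. The sketch below is a rough heuristic, not a detection product; the trusted-host list and the "off-brand redirect" rule are illustrative assumptions:

```python
from urllib.parse import urlparse, parse_qs

# Assumed-trustworthy identity-provider hosts for this sketch.
TRUSTED_AUTH_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}

def oauth_red_flags(url: str) -> list[str]:
    """Flag the suspicious OAuth parameters described above. A non-empty
    result means 'inspect before clicking', not proof of malice."""
    parts = urlparse(url)
    qs = parse_qs(parts.query)
    flags = []
    if parts.hostname in TRUSTED_AUTH_HOSTS:
        if qs.get("prompt") == ["none"]:
            flags.append("silent auth requested (prompt=none)")
        if not qs.get("scope", [""])[0].strip():
            flags.append("missing or empty scope")
        redirect_host = urlparse(qs.get("redirect_uri", [""])[0]).hostname or ""
        if redirect_host and redirect_host not in TRUSTED_AUTH_HOSTS \
                and not redirect_host.endswith((".microsoft.com", ".google.com")):
            flags.append(f"redirect_uri points off-brand: {redirect_host}")
    return flags
```

Run against a crafted link (trusted login domain, `prompt=none`, empty scope, attacker redirect), it reports all three warning signs; a routine sign-in URL produces none.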

Pro tip: use Malwarebytes Scam Guard to help you determine whether the email you received is a scam or not.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Categories: Malware Bytes

High-severity Qualcomm bug hits Android devices in targeted attacks

Malware Bytes Security - Wed, 03/04/2026 - 7:33am

Google has patched 129 vulnerabilities in Android in its March 2026 Android Security Bulletin, including a Qualcomm display flaw that is known to be actively exploited.

You can check your device’s Android version, security update level, and Google Play system update in Settings. You should get a notification when updates are available, but you can also check for them yourself.

On most phones, go to Settings > About phone (or About device), then tap Software updates to see if anything new is available. The exact steps may vary slightly depending on the brand and Android version you’re on.

If your Android phone shows a patch level of 2026-03-05 or later, these issues are fixed.

Keeping your device up to date protects you from known vulnerabilities and helps you stay safe. We know that because of patch gaps and end-of-support cycles, some users may not receive these updates. That’s why additional protection for your Android device is important.

Technical details

The Android zero-day, tracked as CVE-2026-21385, is a high‑severity bug in a Qualcomm graphics/display component that attackers are already exploiting in limited, targeted attacks.

The vulnerability lives in an open‑source Qualcomm graphics/display component used by a large number of Android chipsets; Qualcomm lists well over 230 affected chipset models. Based on recently published Android and chipset market‑share figures, it is reasonable to assume the issue affects hundreds of millions of devices worldwide, even if the exact number is hard to pin down.

On most Android phones, you can view the processor model in Settings > About phone (or About device) > Detailed info and specs, under entries such as “Processor,” “Chipset,” or “SoC.” Names like “Snapdragon 8 Gen 2,” “Snapdragon 778G,” or “Qualcomm SM8xxx/SM7xxx” indicate a Qualcomm chipset, meaning the device may be in the affected family.

Google says there are signs that CVE‑2026‑21385 is already being used in “limited, targeted exploitation,” which usually means a small number of high‑value targets rather than broad, drive‑by attacks on the general public. Current descriptions point to a memory corruption scenario in the graphics component. The official description says:

“Memory corruption while using alignments for memory allocation.”

This means that if an attacker can get a malicious app or local code onto the device, they can feed specially crafted data into the graphics component’s driver and corrupt memory in a controlled way. In practice, a bug like this is a good candidate for turning a normal app’s limited access into something much more powerful, like using it as a building block in a chain of exploits to escalate privileges or to escape a sandbox.

As you can see, the attacker needs some kind of local foothold first, such as getting you to install a malicious app, exploiting another vulnerability, or abusing a compromised app already on the device. 

How to stay safe

From the available information, attackers would need to trick a user into installing a malicious app that could then compromise the device. That’s why it’s a good idea to follow these safety precautions:

  • Only install apps from official app stores whenever possible and avoid installing apps promoted in links in SMS, email, or messaging apps.
  • Before installing finance‑related or retailer apps, verify the developer’s name, number of downloads, and user reviews rather than trusting a single promotional link.
  • Protect your devices. Use an up-to-date, real-time anti-malware solution like Malwarebytes for Android.
  • Scrutinize permissions. Does an app really need the permissions it’s requesting to do the job you want it to do? Especially if it asks for accessibility, SMS, or camera access.
  • Keep Android, Google Play services, and all other important apps up to date so you get the latest security fixes.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Categories: Malware Bytes

Pentagon ditches Anthropic AI over “security risk” and OpenAI takes over

Malware Bytes Security - Tue, 03/03/2026 - 11:05am

On Friday the US Pentagon cut ties with Anthropic, the company behind Claude AI. Defense Secretary Pete Hegseth designated the San Francisco-based company a “supply-chain risk to national security.”

The supply-chain risk designation means that no contractor, supplier, or partner doing business with the US military can deal with Anthropic. The label had previously been applied only to foreign adversaries like Huawei, and using it against a US company marks a rare escalation in a government-industry dispute. According to reports, President Donald Trump also ordered every federal agency to stop using Anthropic’s technology.

What Anthropic wouldn’t budge on

Anthropic called the designation “unlawful and politically motivated” and said it intends to challenge it in court.

At the center of the dispute is how far Anthropic believes its models should be allowed to go inside military systems. Anthropic, the first frontier AI company deployed on the military’s classified networks, wanted two contractual restrictions on its AI model Claude, as outlined in its response to the Pentagon’s announcement: its technology could not be used for mass domestic surveillance of Americans, and it could not be employed in fully autonomous weapons.

The Pentagon had previously demanded that all AI vendors agree to “all lawful purposes” language as part of their contracts. Anthropic told ABC that what the Pentagon finally offered left the door open for the government to violate the company’s no-surveillance and no-weapons clauses.

Defense Secretary Hegseth responded with a statement cancelling Anthropic’s $200m Pentagon contract, awarded last July. He accused Anthropic of attempting to seize veto power over military operations and called the company’s position fundamentally incompatible with American principles.

Anthropic’s CEO Dario Amodei called the government’s response retaliatory and punitive and promised to challenge the designation in court.

Legal scholars suggest the AI company could have a strong case, questioning whether Hegseth can meet the statutory requirements for a designation intended to protect military systems from adversarial sabotage, rather than to resolve a commercial disagreement over contract terms.

Dan W. Ball, senior fellow at the American Foundation for Innovation, called the Pentagon’s move “attempted corporate murder,” arguing that Google, Amazon, and NVIDIA would have to detach themselves from Anthropic if Hegseth got his way. Amazon is Anthropic’s primary cloud computing provider, but it also uses Google’s data centers extensively. Both companies are investors in Anthropic, as is NVIDIA, which also partners with the AI company on GPU engineering. If the Pentagon’s designation restricts federal contractors from integrating Anthropic technology into defense-related systems, those partners could be required to separate or ringfence any federal-facing work involving the company.

OpenAI steps in

In a whirlwind of policy changes by the US military, the Pentagon also signed a deal with ChatGPT creator OpenAI on Friday evening, just a few hours after dropping Anthropic.

OpenAI CEO Sam Altman said the agreement preserved the same principles Anthropic had been blacklisted for defending.

The difference, according to Altman, is the enforcement mechanism. Instead of hard contractual prohibitions, OpenAI accepted the “all lawful purposes” framework but layered on architectural controls: cloud-only deployment, a proprietary safety stack the Pentagon agreed not to override, and cleared engineers embedded forward. OpenAI said these protections made the company confident that the Pentagon couldn’t cross the red lines it shares with Anthropic.

Altman reportedly said Anthropic’s approach differed because it relied on specific contract language rather than existing legal protections, adding Anthropic “may have wanted more operational control than we did.”

The morning after

The policy dispute did not immediately change how existing systems were operating. According to reporting by The Wall Street Journal and Axios, US Central Command used Anthropic’s AI during Operation Epic Fury, a coordinated US–Israeli operation targeting Iran. The outlets reported that the system was used for intelligence assessment, target analysis, and operational modeling.

Claude remained in use because it was already embedded in certain classified military systems. As a senior defense official previously told Axios:

“It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

Hegseth announced a six-month period during which the Pentagon will strip Anthropic’s AI out of its systems.

Consumers vote with their feet

The dispute has also prompted reactions from some AI industry employees and users. More than 875 employees across Google and OpenAI signed an open letter backing Anthropic’s stance. According to the letter:

“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.”

A consumer campaign organized under the name QuitGPT is urging users to stop using ChatGPT, with a protest planned at OpenAI’s HQ this week. Meanwhile, Claude rocketed to the top of Apple’s App Store over the weekend.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

Chrome flaw let extensions hijack Gemini’s camera, mic, and file access

Malware Bytes Security - Tue, 03/03/2026 - 7:10am

Chrome’s Gemini “Live in Chrome” panel (Gemini’s embedded, agent-style assistant mode within Chrome) had a high‑severity vulnerability tracked as CVE‑2026‑0628. The flaw let a low‑privilege extension inject code into the Gemini side panel and inherit its powerful capabilities, including local file access, screenshots, and camera/microphone control.

The vulnerability was patched in a January update. But the deeper story is that AI or agentic browsers are stepping outside long‑standing isolation boundaries, so extension abuse, prompt injection, and trusted‑UI phishing all become much more dangerous.

Chrome’s Gemini “Live in Chrome” panel runs the Gemini web app in a special, privileged side panel that can see what’s on screen and perform actions like reading local files, taking screenshots, and using the camera and microphone to automate tasks.

Researchers found that an extension using the declarativeNetRequest API (Application Programming Interface) could tamper with traffic to gemini.google.com/app when it loaded inside this side panel, not just in a normal tab.

As a result, a basic‑permission extension could inject JavaScript into a high‑privilege browser component and start the camera and microphone without new consent prompts, enumerate local files and directories, take screenshots of any HTTPS site, and even turn the Gemini panel itself into a phishing UI.

Normally, extensions cannot control other extensions or core browser components, but due to this vulnerability, a low‑privilege extension could effectively drive a privileged AI assistant and inherit its powers.

And because the Gemini panel is a trusted part of the Chrome browser, users would not expect it to silently activate camera or microphone or scrape local files at an extension’s whim.

It’s worth understanding that agentic browsers (Gemini in Chrome, Copilot in Edge, Atlas, Comet, and others) embed an AI side panel that sees page content, keeps context, and can autonomously execute multi‑step actions like summarization, form‑filling, and automation.

These assistants need broad access to the web pages you’re looking at, including everything you see and interact with on the screen, sometimes local files, and in some designs even application data (emails, messages). That makes them an attractive “command broker” for attackers.

How to stay safe

After responsible disclosure, Google shipped fixes in early January 2026, so current versions are not vulnerable. Anything lagging that baseline is at risk and should be updated, especially if you’re using “Live in Chrome.”

Install as few extensions as possible, from vendors you can identify and contact. Prefer open‑sourced or well‑audited extensions for anything that touches sensitive workflows.

Be suspicious of sudden permission changes or unexplained new capabilities after updates.

Monitor for anomalies like cameras activating unexpectedly, unexplained screenshots, or Gemini‑related processes touching unusual file paths.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

Samsung TVs stop spying on viewers in Texas. Here’s how to disable ACR anywhere

Malware Bytes Security - Mon, 03/02/2026 - 10:01am

Samsung has settled a lawsuit with the Texas Attorney General over how its smart TVs collect and monetize viewing data using Automated Content Recognition (ACR). As part of the settlement, Samsung agreed to stop collecting ACR data from Texans without explicit, informed consent and to rewrite its on‑screen privacy prompts and dialogs.

Texas Attorney General (AG) Paxton stated:

“I commend Samsung for being one of the first smart TV companies in the world to make these important changes.”

The Texas AG sued Samsung and other TV makers (Hisense, Sony, LG, TCL) over ACR-based “mass surveillance programs” monitoring what people watch and building profiles used for advertising and monetization.

ACR works by:

  • Taking tiny samples of the sound or picture from what’s on your screen (a few seconds at a time).
  • Turning those samples into a kind of fingerprint (a compact pattern that uniquely represents that content).
  • Comparing that fingerprint to a giant database of known shows, movies, channels, and ads to find a match.

If it finds a match, the system knows “this TV user is watching Episode X of Show Y at time Z” or “this ad just played on this device.”
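The matching step can be sketched in a few lines. Real ACR systems use perceptual fingerprints that survive compression, volume changes, and noise; this toy version uses exact hashes of short sample windows, and the "database" entries are invented:

```python
import hashlib

def fingerprint(samples, window=4):
    """Reduce a short run of audio/video samples to compact hashes --
    a toy stand-in for the perceptual fingerprints real ACR uses."""
    return {
        hashlib.sha256(bytes(samples[i:i + window])).hexdigest()[:16]
        for i in range(0, len(samples) - window + 1, window)
    }

# Hypothetical reference database: fingerprints of known content.
KNOWN = {
    "Show Y, Episode X": fingerprint([10, 20, 30, 40, 50, 60, 70, 80]),
    "Ad for Brand Z":    fingerprint([5, 5, 9, 9, 1, 1, 3, 3]),
}

def identify(screen_samples):
    """Return the best-matching known title, or None if nothing matches."""
    probe = fingerprint(screen_samples)
    best = max(KNOWN, key=lambda title: len(probe & KNOWN[title]))
    return best if probe & KNOWN[best] else None
```

Once a sample of your screen matches an entry, the TV maker knows exactly what you were watching and when, which is the data the lawsuits are about.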

Paxton argues that customers did not meaningfully consent to this data collection, which he calls “watchware,” framing it as deliberate monitoring, rather than an accident.

Samsung also faces a federal class action in New York. Plaintiffs claim Samsung TVs track, store, and sell viewing data to companies such as Google and X (Twitter) without informed consent, in violation of the federal Video Privacy Protection Act and various state privacy laws.​

The New York complaint further alleges that Samsung’s ACR records image and audio every 500 ms regardless of source (broadcast, streaming apps, or PC monitor use), and that Samsung’s privacy notice downplays the scope of that data collection by referring to “processing” viewing history.

How to disable ACR

If you’d prefer to limit or disable ACR-style monitoring of your watching behavior, here’s where to look. Menu names may vary slightly depending on the model and year.

Samsung

Samsung has agreed to modify its consent and disclosure practices for Texas residents as part of the settlement. Users elsewhere can manually adjust these settings:

  • Press Home on the remote.
  • Go to Settings → Support → Terms & Privacy → Privacy Choices (or Settings → All Settings → General & Privacy → Terms & Privacy / Privacy Choices).
  • Turn Viewing Information Services off (this is Samsung’s ACR).
  • Optional hardening: In the same menu area, disable Interest-Based Advertising and any Voice Recognition Services if you don’t want voice data sent off‑box.
LG TVs (webOS)
  • Press Settings (gear icon).
  • Go to All Settings → General → System → Additional Settings.
  • Set Live Plus to off (this is LG’s ACR layer).
  • In the same or nearby menu, enable Limit Ad Tracking (or similar option) to reduce ad profiling.
Vizio TVs
  • Press Menu on the remote.
  • Go to System → Reset & Admin.
  • Turn Viewing Data off (this disables Vizio’s ACR and viewing logs).
Sony TVs (Google TV / Android TV)

Many Sony TVs use Samba Interactive TV as the ACR component.

  • Press Home.
  • For newer Google TV models:
    • Go to Settings → All Settings → Privacy; toggle Samba Interactive TV off.​
  • For models using usage‑diagnostics style controls:
    • Go to Settings → Device Preferences → Usage & Diagnostics and turn all reporting off.

This disables the Samba ACR integration and general telemetry used for ad/experience tuning.

Roku TVs (TCL, Hisense, etc. running Roku OS)
  • From the Roku home screen, go to Settings → Privacy.
  • Under Advertising:
    • Uncheck / toggle off Personalize ads (this stops use of your advertising ID for interest‑based ads).
    • Optionally select Reset advertising ID to rotate the ID.​
  • Under Smart TV Experience (if present):
    • Turn off Use info from TV inputs to stop ACR on HDMI and other external sources.​

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

A fake FileZilla site hosts a malicious download

Malware Bytes Security - Mon, 03/02/2026 - 8:57am

A trojanized copy of the open-source FTP client FileZilla 3.69.5 is circulating online. The archive contains the legitimate FileZilla application, but with a single malicious DLL added to the folder. When someone downloads this tampered version, extracts it, and launches FileZilla, Windows loads the malicious library first. From that moment on, the malware runs inside what appears to be a normal FileZilla session.

Because the infected copy looks and behaves like the real software, victims may not realize anything is wrong. Meanwhile, the malware can access saved FTP credentials, contact its command-and-control server, and potentially remain active on the system. The risk does not stop with the local computer. Stolen credentials could expose the web servers or hosting accounts the user connects to.

This attack does not exploit a vulnerability in FileZilla itself. It depends on someone downloading the modified copy from an unofficial website and running it. The spread mechanism is simple deception, such as lookalike domains or search poisoning, rather than automatic self-propagation.

A growing trend: trusted software, poisoned packages

The abuse of trusted open-source utilities appears to be growing. Last month, we reported on fake 7-Zip downloads turning home PCs into proxy nodes. Security researchers have also reported that a compromised Notepad++ update infrastructure delivered a custom backdoor through DLL sideloading for several months.

Now, FileZilla has been added to the list of software impersonated in this way. A lookalike domain, filezilla-project[.]live, hosts the malicious archive.

A fake FileZilla site hosting a malicious download.

The method is straightforward: take a legitimate portable copy of FileZilla 3.69.5, place a single malicious DLL into the folder, re-zip it, and distribute the archive. The infection relies on a well-understood Windows behaviour called DLL search order hijacking, where an application loads a library from its own directory before checking the Windows system folder.

One file, one timestamp, one giveaway

The archive contains 918 entries. Of those, 917 carry a last-modified date of 2025-11-12, consistent with an official FileZilla 3.69.5 portable release. One entry stands out: version.dll, dated 2026-02-03, nearly three months newer than everything else in the archive.

A clean FileZilla portable distribution does not include a version.dll. The legitimate DLLs in the package are all FileZilla-specific libraries such as libfilezilla-50.dll and libfzclient-private-3-69-5.dll. The Windows Version API library—version.dll—is a system DLL that lives in C:\Windows\System32 and has no reason to be inside a FileZilla folder. Its presence is the entire attack.
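Both giveaways, the planted version.dll and the outlier timestamp, can be checked before ever extracting an archive. A minimal sketch using only the Python standard library; the majority-date heuristic is our own illustration, not an official detection rule:

```python
import zipfile
from collections import Counter

def find_anomalies(archive):
    """Scan a zip (path or file-like object) for two red flags:
    entries whose modification date differs from the archive's majority
    date, and a version.dll that a clean FileZilla portable never ships."""
    with zipfile.ZipFile(archive) as zf:
        infos = zf.infolist()
        # date_time is (year, month, day, hour, minute, second); compare dates only
        majority_date, _ = Counter(i.date_time[:3] for i in infos).most_common(1)[0]
        outliers = [i.filename for i in infos if i.date_time[:3] != majority_date]
        planted = [i.filename for i in infos
                   if i.filename.lower().endswith("version.dll")]
    return outliers, planted
```

Run against this campaign’s archive, a check like this would report version.dll on both counts: the only entry dated 2026-02-03, and the only Windows system DLL name in the folder.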

Caught in the act: what Process Monitor showed us

We confirmed the sideloading on a live system using Process Monitor. When filezilla.exe starts, it needs to load a series of DLLs. For each one, Windows checks the application’s own directory first, then falls back to the system folder.

For system libraries like IPHLPAPI.DLL and POWRPROF.dll, the application directory returns NAME NOT FOUND, so Windows loads the legitimate copies from C:\Windows\System32. This is normal behaviour. But for version.dll, the trojanized copy is sitting in the FileZilla folder. Windows finds it there, maps it into memory, and never reaches System32. The malicious code now runs inside filezilla.exe’s own process.

Seventeen milliseconds after loading, the malicious DLL searches for version_original.dll in the same directory and gets NAME NOT FOUND. This is a telltale sign of DLL proxying, a technique where the malicious DLL is designed to forward legitimate function calls to a renamed copy of the original library so the host application keeps working normally. In this case, the renamed original was not included in the archive, which may contribute to application instability.

FileZilla calls LoadLibrary with just the DLL filename rather than the full path, so Windows searches the application’s own directory first, exactly the behaviour attackers need to plant a malicious DLL. This pattern is common in legitimate software, which is what makes DLL sideloading so broadly applicable.

Built to detect analysis environments

The DLL includes multiple checks designed to detect virtual machines and sandboxes before executing its payload. Behavioural analysis reveals BIOS version checks, system manufacturer queries, VirtualBox registry key probing, disk drive enumeration, and write-watch memory allocation, a technique that can reveal memory scanning by analysis tools. Evasive sleep loops round out the anti-analysis toolkit.

These checks are selective rather than absolute. In sandboxed environments that closely resembled real user systems, the loader successfully resolved its C2 domain and attempted callbacks. In more obviously virtualised setups, it went dormant, producing no network activity beyond routine Windows DNS queries. On our own test system, FileZilla terminated almost immediately after launch, consistent with the DLL detecting the environment and killing the host process before reaching its network stage.

DNS-over-HTTPS: phoning home where nobody is listening

When the loader determines the environment is safe, it does not use traditional DNS to resolve its command-and-control domain. Instead, it sends an HTTPS request to Cloudflare’s public resolver:

https://1.1.1.1/dns-query?name=welcome.supp0v3[.]com&type=A

This technique, DNS-over-HTTPS, or DoH, bypasses corporate DNS monitoring, DNS-based blocklists, and security appliances that inspect traffic on port 53. It is the same evasion approach used in last month’s fake 7-Zip proxyware campaign.
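Cloudflare documents a JSON variant of the same resolver endpoint, which makes the loader’s resolution step easy to reproduce when building or testing detection rules. A sketch using only the standard library; query a name you control rather than the malware’s domain:

```python
import json
import urllib.request

DOH_ENDPOINT = "https://1.1.1.1/dns-query"

def doh_url(name: str, rtype: str = "A") -> str:
    """Build the resolver URL. Note the transport: HTTPS on port 443,
    not DNS on port 53, which is what sidesteps DNS monitoring."""
    return f"{DOH_ENDPOINT}?name={name}&type={rtype}"

def doh_resolve(name: str, rtype: str = "A") -> list:
    """Resolve a name via Cloudflare's DNS-over-HTTPS JSON API,
    bypassing the system resolver entirely."""
    req = urllib.request.Request(
        doh_url(name, rtype), headers={"Accept": "application/dns-json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return [a["data"] for a in json.load(resp).get("Answer", [])]
```

From a defender’s perspective the useful signal is the client, not the traffic: a browser talking to 1.1.1.1 over HTTPS is routine, while filezilla.exe doing so is not.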

Once the domain resolves, the loader calls back to its staging server. Memory analysis of the loader process uncovered the full configuration embedded at runtime:

{
  "tag": "tbs",
  "referrer": "dll",
  "callback": "https://welcome.supp0v3[.]com/d/callback?utm_tag=tbs2&utm_source=dll"
}

The UTM-style campaign tracking suggests a structured operation with multiple distribution vectors. The tbs2 tag and dll source identifier likely differentiate this DLL sideloading distribution from other delivery methods within the same operation.

A second C2 channel on a non-standard port

Beyond the DoH callback, the malware also reaches out to 95.216.51[.]236 on TCP port 31415, a non-standard port on Hetzner-hosted infrastructure. Network capture shows ten connection attempts across two sessions, suggesting a persistent retry mechanism designed to maintain contact with its operator. The use of a high, non-standard port is a common technique for blending C2 traffic past firewalls that only inspect well-known service ports.

What the behavioural analysis flagged

Automated behavioural analysis of the archive flagged several capabilities beyond what we observed directly, including credential harvesting from local FTP client software. Given that the malware is sideloaded by FileZilla, some of these detections may reflect FileZilla’s own legitimate access to its credential store, though the combination with the C2 callback infrastructure makes a benign explanation unlikely.

Additional behavioral indicators included:

  • Creating suspended processes and writing to other processes’ memory
  • Runtime .NET compilation via csc.exe
  • Registry modifications consistent with autorun persistence
  • Multiple file encryption API calls

Taken together, these behaviors suggest a multifunctional implant capable of credential theft, process injection, persistence, and potentially data encryption.

What to do if you may have been affected

Be cautious about where you download software. DLL sideloading is not new, and this campaign shows how adding a single malicious file to an otherwise legitimate archive can compromise a system. We have recently seen similar tactics involving fake 7-Zip downloads and other compromised distribution channels. Treat software downloaded outside official project domains with the same caution as unexpected email attachments.

  • Check for version.dll inside any FileZilla portable directory on your system. A legitimate FileZilla distribution does not ship this file. If present, treat the system as compromised.
  • Download FileZilla only from the official project domain at filezilla-project.org and verify the download hash against the value published on the site.
  • Monitor for DNS-over-HTTPS traffic from non-browser processes. Outbound HTTPS connections to known DoH resolvers such as 1.1.1.1 or 8.8.8.8 from applications that have no business making web requests should be investigated.
  • Block the domains and IP addresses listed in the IOC section below at your network perimeter.
  • Inspect zip archives for timestamp anomalies before extracting and running applications. A single file with a different modification date from the rest of the archive is a simple but effective red flag.
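The hash-verification step above needs no extra tooling on any platform with Python installed. A sketch; the expected value must be copied from the official filezilla-project.org download page:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large archives never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_hex: str) -> bool:
    """Compare a downloaded archive against the hash published on the
    official site; any mismatch means do not run the contents."""
    return sha256_of(path) == published_hex.strip().lower()
```

A mismatch does not tell you what changed, only that something did, which is reason enough to delete the download and fetch it again from the official domain.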

Malwarebytes detects and blocks known variants of this threat.

Indicators of Compromise (IOCs)

File hashes (SHA-256)

  • 665cca285680df321b63ad5106b167db9169afe30c17d349d80682837edcc755 — trojanized FileZilla archive (FileZilla_3.69.5_win64.zip)
  • e4c6f8ee8c946c6bd7873274e6ed9e41dec97e05890fa99c73f4309b60fd3da4 — trojanized version.dll contained in the archive

Domains

  • filezilla-project[.]live
  • welcome.supp0v3[.]com — C2 callback and staging

Network indicators

  • 95.216.51[.]236:31415 — C2 server


Purchase order attachment isn’t a PDF. It’s phishing for your password

Malware Bytes Security - Mon, 03/02/2026 - 3:59am

An attachment named New PO 500PCS.pdf.hTM, posing as a purchase order in PDF form, turned out to be something entirely different: a credential-harvesting web page that quietly sent passwords and IP/location data straight to a Telegram bot controlled by an attacker.

Imagine you’re in accounts payable, sales, or operations. Your day is a steady flow of invoices, purchase orders, and approvals. An email like this may look like just another item in your daily queue.

“Dear Seller
I hope this message finds you well!
I am interested in purchasing this product and I would appreciate it if you could provide me with a quotation for the following attached below:
Quantity: [f16940-500PCS]
Any specific specifications or details, if applicable
Additionally, I would like to inquire about the estimated delivery time once the order is confirmed. Kindly include your usual delivery schedule and any relevant terms.
Please let me know the total cost, including any applicable taxes or fees, and any other relevant terms.
Thank you very much for your assistance. I look forward to your prompt response.”

What immediately jumps out is the double file extension. Attachments with extensions like .pdf.htm are a classic phishing tactic: the file is disguised as a document (PDF) but is actually an HTML file that opens in the browser and can contain malicious scripts or phishing forms.
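Mail gateways and triage scripts can flag this pattern mechanically. A minimal sketch; both extension lists are illustrative samples, not a complete policy:

```python
from pathlib import PurePosixPath

# A document-like inner extension hiding an active-content outer one is the lure.
DOCUMENT_LIKE = {".pdf", ".doc", ".docx", ".xls", ".xlsx"}
RISKY_FINAL = {".htm", ".html", ".js", ".vbs", ".exe", ".scr"}

def is_double_extension_lure(filename: str) -> bool:
    """True when a file poses as a document but ends in an active-content
    extension, e.g. 'New PO 500PCS.pdf.hTM'. Case is ignored, as Windows does."""
    suffixes = [s.lower() for s in PurePosixPath(filename).suffixes]
    return (len(suffixes) >= 2
            and suffixes[-1] in RISKY_FINAL
            and suffixes[-2] in DOCUMENT_LIKE)

is_double_extension_lure("New PO 500PCS.pdf.hTM")  # -> True
is_double_extension_lure("Q3 report.pdf")          # -> False
```

Note that legitimate compound extensions such as .tar.gz pass through cleanly, because the final extension is what decides how the file is handled.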

But let’s suppose you didn’t notice that. What happens when you open the attachment?

You’re shown a password prompt in front of a blurred background. The recipient’s email address is already filled in. In the background, the phishing script grabs some environment details—IP, geolocation, and user agent—and sends them to the attacker along with any details you filled out.

After a short “Verifying…” message, you get a familiar-looking error:

“Your account or password is incorrect. Try again.”

This is a psychological trick:

  • It’s believable (typos happen).
  • It encourages a second password attempt, perhaps to harvest another, different password.

You type your password again, click Next, and this time it appears to be accepted.

Instead of opening a real document, you’re redirected to a blurry image that looks like an invoice hosted on ibb[.]co. That’s a shortened domain for ImgBB, a legitimate image-hosting and sharing service. That unexpected image may confuse you just enough to stop you from immediately changing your credentials or immediately alerting your IT department.


Rather than emailing stolen credentials or logging them on a server that might be blocked by security software, the page sends them using a Telegram bot. The attacker receives:

  • Email and password combination
  • IP and geolocation
  • Browser and operating system details

Telegram is encrypted, widely used, and often not blocked by organizations, which makes it a popular command and control (C2) channel for phishers.

The unobfuscated SendToTelegram function

As unprofessional as this phishing attempt may look, every victim who sends real login details to the phisher is a win on a near-zero investment. For the target, it can mean anything from a forced password change to a compromised Acrobat or other account, which can then be used or sold for more serious attacks.

How to stay safe

The good news: once you know what to look for, these attacks are much easier to spot and block. The bad news: they’re cheap, scalable, and will continue to circulate.

So, the next time a “PDF” asks for your password in a browser, pause to think about what might be hiding under the hood.

Beyond avoiding unsolicited attachments, here are a few ways to stay safe:

  • Only access your accounts through official apps or by typing the official website directly into your browser.
  • Check file extensions carefully. Even if a file looks like a PDF, it may not be.
  • Enable multi-factor authentication for your critical accounts.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.

Pro tip: Malwarebytes Scam Guard recognized this email as a scam.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.


Public Google API keys can be used to expose Gemini AI data

Malware Bytes Security - Fri, 02/27/2026 - 7:33am

Google Maps/Cloud API (Application Programming Interface) keys that used to be safe to publish can now, in many cases, be used as real Gemini AI credentials. This means that any key sitting in public JavaScript or application code may now let attackers connect to Gemini through its API, access data, or run up someone else’s bill.

Researchers found around 2,800 live Google API keys in public code that can authenticate to Gemini, including keys belonging to major financial, security, and recruiting firms, and even Google itself.

Historically, Google Cloud API keys for services like Maps, YouTube embeds, Firebase, etc., were treated as non‑secret billing identifiers, and Google’s own guidance allowed embedding them in client‑side code.

If we compare this issue to reusing a password across different sites and platforms, we see that a single identifier can become a skeleton key to more valuable assets than users or developers ever intended.

The key difference is where responsibility sits. With password reuse, end users are explicitly warned. Every service tells them to pick unique passwords, and the security community has hammered this message for years. If the same password is reused across three sites and one breach compromises all of them, the risk comes from a user decision, even if convenience drove that decision.

With Google API keys, developers and security teams were following Google’s own historical guidance that these keys were just billing identifiers safe for client‑side exposure. When Gemini was turned on, those old API keys suddenly worked as real authentication credentials.

From an attacker’s perspective, password reuse means you can take one credential stolen from a weak site and replay it against email, banking, or cloud accounts using credential stuffing. The Gemini change means a key originally scoped in everyone’s mental model as “just for Maps” now works against an AI endpoint that may be wired into documents, calendars, or other sensitive workflows. It can also be abused to burn through someone’s cloud budget at scale.

How to stay safe

The difference in this instance of what is effectively password reuse is that it was baked in by design rather than chosen by users.

The core problem is that Google uses a single API key format for two fundamentally different purposes: public identification and sensitive authentication. The Gemini API inherited a key management architecture built for a different purpose.

The researchers say Google has recognized the problem they reported and taken meaningful steps, but has yet to fix the root cause.

Advice for developers

Developers should check whether Gemini (the Generative Language API) is enabled on their projects, audit all API keys in their environment for public exposure, and rotate any exposed keys immediately.

  • Check every Google Cloud Platform (GCP) project for the Generative Language API. Go to the GCP console, navigate to APIs & Services > Enabled APIs & Services, and look for the Generative Language API. Do this for every project in your organization. If it’s not enabled, you’re not affected by this specific issue.
  • If the Generative Language API is enabled, audit your API keys. Navigate to APIs & Services > Credentials. Check each API key’s configuration. You’re looking for two types of keys:
    • Keys showing a warning icon, meaning they are set to unrestricted
    • Keys that explicitly list the Generative Language API in their allowed services

Either configuration allows the key to access Gemini.

  • Verify that none of those keys are public. This is the critical step. If you find a key with Gemini access embedded in client-side JavaScript, checked into a public repository, or otherwise exposed online, you have a problem. Start with your oldest keys first. Those are the most likely to have been deployed publicly under the old guidance that API keys are safe to share, and then retroactively gained Gemini privileges when someone on your team enabled the API. If you find an exposed key, rotate it.
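The audit steps above can be verified end to end by probing a key against the Generative Language API’s model-listing endpoint. The sketch below assumes Google’s public v1beta REST path; treat it as illustrative, and only probe keys your organization owns:

```python
import json
import urllib.error
import urllib.request

GEMINI_MODELS = "https://generativelanguage.googleapis.com/v1beta/models"

def audit_url(api_key: str) -> str:
    """Build the models-listing probe URL for a key under audit."""
    return f"{GEMINI_MODELS}?key={api_key}"

def key_has_gemini_access(api_key: str) -> bool:
    """True if the key authenticates to the Generative Language API.
    A model list back means the key works as a Gemini credential; an
    HTTP error typically means the key is restricted or the API is off."""
    try:
        with urllib.request.urlopen(audit_url(api_key), timeout=10) as resp:
            return "models" in json.load(resp)
    except urllib.error.HTTPError:
        return False
```

Any key that passes this check and also appears in client-side code or a public repository should be rotated immediately.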
Advice for individuals

For regular users, this is less about key management and more about keeping your Google account locked down and being cautious about third-party access.

  • Only link Gemini to accounts or data stores (Drive, Mail, Calendar, enterprise systems) you’re comfortable being reachable via API and regularly review which integrations and third‑party apps have access to your Google account.
  • When evaluating apps that integrate Gemini (browser extensions, SaaS tools, mobile apps), favour those that make Gemini calls from their backend rather than directly from your browser.
  • If you use Gemini via a Google Cloud project (e.g., you’re a power user or use it for work), monitor GCP billing reports and usage logs for unusual Gemini activity, especially spikes that do not match your own usage.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.


Inside a fake Google security check that becomes a browser RAT

Malware Bytes Security - Fri, 02/27/2026 - 6:29am

A website styled to resemble a Google Account security page is distributing what may be one of the most fully featured browser-based surveillance toolkits we have observed in the wild.

Disguised as a routine security checkup, it walks victims through a four-step flow that grants the attacker push notification access, the device’s contact list, real-time GPS location, and clipboard contents—all without installing a traditional app.

For victims who follow every prompt, the site also delivers an Android companion package: a native implant that includes a custom keyboard (enabling keystroke capture), accessibility-based screen reading capabilities, and permissions consistent with call log access and microphone recording.

The infrastructure uses a single command-and-control domain, google-prism[.]com. The domain is routed through Cloudflare’s content delivery network, a service widely used by both legitimate and malicious sites.

A security page without an address bar

The attack begins with what appears to be a genuine Google Account security alert. It does not rely on an exploit or browser bug. It relies on you believing you are responding to Google.

When installed as a PWA (a Progressive Web App, essentially a website that pins to the home screen and runs in its own window), the browser address bar disappears. The victim sees what looks and feels like a native Google app.

In testing, we were guided through four steps, each framed as a protective action.

  • The user is prompted to “install” the security tool as a PWA.
  • The site requests notification permissions, framed as enabling “security alerts.” Web push notifications give the attacker a persistent communication channel that can function even when the PWA is not actively open.
  • The site uses the Contact Picker API—a legitimate browser feature designed for sharing contacts with web apps. The victim is prompted to select contacts for sharing. After selection, the interface displays confirmation text such as “X contacts protected,” framing the step as a security check. However, network analysis shows the selected contacts are sent directly to the attacker-controlled domain.
  • The site requests GPS location under the guise of “verifying your identity from a trusted location.” Latitude, longitude, altitude, heading, and speed are all exfiltrated.
What happens after you close the tab

When the victim installs the PWA and grants permissions, two separate pieces of code go to work. Understanding which does what explains why closing the tab is not enough.

The page script runs as long as the app is open. It attempts to read the clipboard on focus and visibility-change events, looking for one-time passwords and cryptocurrency wallet addresses. It tries to intercept SMS verification codes via the WebOTP API on supported browsers, builds a detailed device fingerprint, and polls /api/heartbeat every 30 seconds, waiting for the operator to send commands.

The service worker is the part that survives if you close the tab.

It sits underneath the page, handling push notifications, running background tasks embedded in push payloads, and queuing stolen data locally when the device goes offline, then flushing that queue the moment connectivity returns. It includes handlers for background and periodic sync events, allowing it to wake and execute tasks where those features are supported and registered.

Close the browser tab and the page script stops. Clipboard monitoring and SMS interception end immediately.

But the service worker remains registered. If the victim granted notification permissions, the attacker can still wake it silently, push a new task, or trigger a data upload without reopening the app.

And if the victim ever opens it again, collection resumes instantly.

Your browser, their proxy

Perhaps the most concerning capability is the WebSocket relay. Once connected, the attacker can route arbitrary web requests through the victim’s browser as if they were browsing from the victim’s own network.

The malware acts as an HTTP proxy, executing fetch requests with whatever method, headers, credentials, and body the attacker specifies, then returns the full response including headers.

This means:

  • If the victim is on a corporate network, internal resources could become reachable
  • IP-based access controls can be bypassed
  • The attacker’s traffic appears to originate from the victim’s residential IP address

The toolkit also includes a port scanner that sweeps internal network ranges (by default, all 254 addresses on the local subnet across ports 80, 443, and 8080), using a timing-based technique to identify live hosts, all from within the browser sandbox.

In addition, the attacker can execute arbitrary JavaScript on the victim’s device via a remote eval command sent over the WebSocket.

Stolen data never disappears

The toolkit is engineered to tolerate poor connectivity. When the device is offline, captured data—clipboard captures, location updates, intercepted OTPs—is queued in the browser’s Cache API, stored as individual entries under keys like /exfil/{timestamp}-{random}.

When connectivity returns, a Background Sync event replays every queued item to the server. Each entry is deleted only after the server confirms receipt.

On Chromium-based browsers, the service worker includes a handler for Periodic Background Sync under the tag c2-checkin, enabling scheduled wake-ups where the feature is supported and activated. Combined with push-triggered heartbeats, this means the attacker can maintain contact with a compromised device for as long as the PWA remains installed, which could be weeks or months.

When the browser isn’t enough: the native implant

For victims who follow every prompt, the web layer delivers a second payload: an Android APK disguised as a “critical security update.”

The download page claims it is “Version 2.1.0 · 2.3 MB · Verified by Google.”

The actual file is a 122 KB package named com.device.sync, labeled “System Service” in the app drawer.

The APK requests 33 Android permissions, including high-risk privileges such as SMS access, call log access, microphone access, contacts access, and accessibility service control.

It includes:

  • A custom keyboard capable of capturing keystrokes
  • A notification listener that can read incoming notifications, including potential two-factor codes
  • An accessibility service that can observe screen content and perform actions in other apps
  • An autofill service positioned to intercept credential fill requests

The web layer’s “Enable Autofill” screen is designed to guide the victim through turning on this malicious autofill service in Android settings.

To enhance persistence, the APK registers as a device administrator (which can complicate uninstallation), sets a boot receiver to execute on startup, and schedules alarms intended to restart components if terminated. The application includes components consistent with overlay-based UI capabilities, suggesting potential use for phishing or credential interception overlays. A FileProvider component is present, consistent with staged update delivery. Whether updates can be installed silently depends on device privilege level and policy configuration.

What to do if you may have been affected

This campaign shows how attackers can abuse legitimate browser features through social engineering rather than exploiting a vulnerability in Google’s systems.

Instead of using a web page merely to deliver a traditional executable, the operators turn the browser itself into a surveillance platform. The PWA layer alone—without any native installation—can harvest contacts, intercept one-time passwords, track GPS location, scan internal networks, and proxy traffic through the victim’s device. The Android APK extends those capabilities to keystroke capture, accessibility-based screen monitoring, and broader device-level surveillance through high-privilege permissions.

What makes this dangerous is that each permission request is presented as a security measure. Victims are not bypassing warnings; they are responding to what appears to be a legitimate security alert. The social engineering is central to how the attack works.

Google does not conduct security checkups through unsolicited pop-up pages. If you receive an unexpected “security alert” asking you to install software, enable notifications, or share contacts, close the page. Legitimate account security tools are accessed directly through your Google Account at myaccount.google.com.

Follow the steps below to review permissions and remove the malicious site.

On Android
  • Check your installed apps and home screen for a “Security Check” PWA. On Android, go to Settings > Apps and look for it. Uninstall it immediately.
  • Check for an app called “System Service” with the package name com.device.sync. If device administrator access is enabled, revoke it first under Settings > Security > Device admin apps before uninstalling.
  • Change passwords for any accounts where you used two-factor authentication via SMS or copied passwords to the clipboard while the malware was present.
  • Revoke notification permissions for any web apps you do not recognise. In Chrome on Android: Settings > Site Settings > Notifications.
  • Review your autofill settings. If an unknown autofill service was enabled, remove it under Settings > Passwords & autofill > Autofill service.
  • If the native APK was installed, consider a factory reset. The malware registers as a device administrator and implements multiple persistence mechanisms. If removal fails or device administrator privileges cannot be revoked, a factory reset may be necessary.
  • Run a scan with reputable mobile security software to detect any remaining components.
On Windows (Chrome, Edge, and other Chromium browsers)
  • Uninstall the PWA. In Chrome, click the three-dot menu and go to Installed apps (or visit chrome://apps). Right-click the “Security Check” app and select Remove. In Edge, go to edge://apps and do the same.
  • Unregister the service worker. Navigate to chrome://serviceworker-internals (or edge://serviceworker-internals) and look for any entry associated with the malicious domain. Click Unregister to remove it. If the PWA remains installed or push permissions are still granted, the service worker may continue to receive push-triggered events in the background.
  • Revoke notification permissions. Go to chrome://settings/content/notifications (or edge://settings/content/notifications) and remove any site you do not recognise from the Allowed list.
  • Clear site data for the malicious origin. In Chrome: Settings > Privacy and security > Site settings > View permissions and data stored across sites. Search for the domain and click Delete data. This removes cached files, the offline exfiltration queue, and any stored configuration.
  • Check for suspicious browser extensions. While this particular toolkit does not use an extension, victims who followed attacker instructions may have installed additional components. Review chrome://extensions or edge://extensions and remove anything unfamiliar.
  • Reset browser sync if clipboard or password data may have been compromised. If you sync passwords across devices, change your Google or Microsoft account password first, then review saved passwords for any you did not create.
  • Run a full system scan. While this threat is primarily browser-resident on Windows, the remote eval capability means additional payloads could have been delivered during the compromise window.
On Firefox (desktop and Android)

Firefox does not support PWA installation, the Contact Picker API, WebOTP, or Background Sync, so much of this toolkit simply will not function. However, Firefox does support service workers and push notifications, meaning the notification-based C2 channel could still operate if a victim granted permissions. Clipboard monitoring would depend on page execution context and user interaction events, and is not guaranteed in background scenarios on Firefox.

  • Revoke notification permissions. Go to Settings > Privacy & Security > Permissions > Notifications > Settings, and remove any unfamiliar entries.
  • Remove the service worker. Navigate to about:serviceworkers and click Unregister next to any entry you do not recognise.
  • Clear site data. Go to Settings > Privacy & Security > Cookies and Site Data > Manage Data, search for the domain, and remove it. This wipes cached content and any queued exfiltration data.
  • On Firefox for Android, review any home screen shortcuts that may have been added manually; Firefox on Android does allow “Add to Home screen” even without full PWA support.
On Safari (macOS and iOS)

Safari on iOS 16.4 and later supports PWA installation (“Add to Home Screen”) and push notifications, so the core phishing flow and notification-based C2 channel can work. However, Safari does not support the Contact Picker API, WebOTP, or Background Sync, which limits the toolkit’s passive surveillance capabilities.

  • Remove the PWA from your home screen. Long-press the Security Check icon and tap Remove App (or Delete Bookmark on older iOS versions).
  • Revoke notification permissions. On iOS: Settings > Safari > Notifications (or Settings > Notifications, and look for the PWA by name). On macOS: System Settings > Notifications > Safari.
  • Clear website data. On iOS: Settings > Safari > Advanced > Website Data, search for the domain, and delete it. On macOS: Safari > Settings > Privacy > Manage Website Data.
  • On macOS, also check Safari > Settings > Extensions for anything unfamiliar, and review any Login Items under System Settings > General > Login Items & Extensions.
Indicators of Compromise (IOCs)

File hashes (SHA-256)

  • 1fe2be4582c4cbce8013c3506bc8b46f850c23937a564d17e5e170d6f60d8c08  (sync.apk)

Domains

  • google-prism[.]com

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Categories: Malware Bytes

Fake Zoom and Google Meet scams install Teramind: A technical deep dive

Malware Bytes Security - Thu, 02/26/2026 - 5:40pm

On February 24, 2026, we published an article about how a fake Zoom meeting “update” silently installs monitoring software, documenting a campaign that used a convincing fake Zoom waiting room to push a legitimate Teramind installer abused for unauthorized surveillance onto Windows machines. Teramind has stated they are not affiliated with the threat actors described, did not deploy the software referenced, and condemn any unauthorized misuse of commercial monitoring technologies. 

Following publication of our findings, the malicious domain was reported to its domain name registrar, Namecheap, which confirmed it suspended the service. Despite the takedown, our continued monitoring shows the campaign is not only still active but growing: we have now identified a parallel operation impersonating Google Meet, running from a different domain and infrastructure. 

In this article, we’ll provide the deeper technical analysis behind both variants: catalogue the Teramind instance IDs used by scammers that we have directly observed or collected from sandbox repositories, document our hands-on detonation of the installer in a controlled environment, and answer a question that emerged during our research: how can a single, identical Windows installer package serve several different attacker accounts? 

The campaign expands to Google Meet 

While the original Zoom-themed site at uswebzoomus[.]com was taken down by Namecheap following community reporting, a second site at googlemeetinterview[.]click is actively deploying the same payload using an identical playbook adapted for Google Meet. 

The Google Meet variant presents a fake Microsoft Store page branded as “Google Meet for Meetings” published by “Google Meet Video Communications, Inc,” which is a fabricated entity. A “Starting download…” button is displayed while the MSI file is silently delivered via the path /Windows/download.php. The referring page is /Windows/microsoft-store.php, confirming the fake Microsoft Store screen is served by the attacker’s infrastructure, not by Microsoft. 

Our Fiddler traffic capture of the Google Meet variant shows the response header:

Content-Disposition: attachment; filename="teramind_agent_x64_s-i(__06a23f815bc471c82aed60b60910b8ec1162844d).msi".  

Unlike the Zoom variant, where the filename was disguised as a Zoom component, this variant does not even attempt to hide the scammer’s use of Teramind in the filename. We verified both files are byte-for-byte identical (MD5: AD0A22E393E9289DEAC0D8D95D8118B5), confirming a single binary is being served across both campaigns with only the filename changed. 
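Confirming that two captured samples are the same binary is a straightforward hash comparison. A minimal sketch (the local filenames in the comment are hypothetical placeholders, not the served names):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Byte-for-byte identical files always produce the same digest, e.g.:
# sha256_of("zoom_variant.msi") == sha256_of("meet_variant.msi")
```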

Infrastructure differences between variants 

Despite using the same payload, the two variants are hosted on different infrastructure. The Zoom variant at uswebzoomus[.]com ran on Apache/2.4.58 (Ubuntu) and was registered through Namecheap on 2026-02-16. The Google Meet variant at googlemeetinterview[.]click runs on a LiteSpeed server.  

Both serve the download via PHP scripts and use the same fake Microsoft Store redirect pattern, but the switch in web server and domain registrar suggests the operator anticipated takedowns and pre-positioned fallback infrastructure. 

One binary, many identities: how the installer reads its own filename 

During our investigation, we identified 14 distinct MSI filenames sharing the same SHA-256 hash. Of these, two were directly captured from malicious infrastructure through our malware domain analysis: the Zoom variant from uswebzoomus[.]com and the Google Meet variant from googlemeetinterview[.]click. The remaining filenames were sourced from sandbox repositories.  

It is important to note that some of these sandbox-sourced filenames may represent legitimate corporate Teramind deployments rather than malicious activity. Teramind is a commercial product with lawful enterprise use cases, and files submitted to sandbox services do not necessarily indicate abuse. Nevertheless, they all share the same binary and demonstrate the same filename-based configuration mechanism.  

Every file shares the same SHA-256 hash: 644ef9f5eea1d6a2bc39a62627ee3c7114a14e7050bafab8a76b9aa8069425fa. This raised an immediate question: if the Teramind instance ID changes with every filename, but the binary is byte-for-byte identical, where is the ID actually stored? 

The answer lies in a .NET custom action embedded inside the MSI. Our behavioral analysis reveals the following sequence: 

Calling custom action Teramind.Setup.Actions!Teramind.Setup.Actions.CustomActions.ReadPropertiesFromMsiName 

PROPERTY CHANGE: Modifying TMINSTANCE property. Its current value is 'onsite'. Its new value: '__941afee582cc71135202939296679e229dd7cced'. 

PROPERTY CHANGE: Adding TMROUTER property. Its value is 'rt.teramind.co'. 

The MSI ships with a default TMINSTANCE value of onsite. This is the standard Teramind on-premise default. At install time, the ReadPropertiesFromMsiName custom action parses the installer’s own filename, extracts the 40-character hex string from the s-i(__) portion, and overwrites the default with the attacker-specific instance ID. 

The log also shows the message Failed to get router from msi name—meaning the installer attempted to extract a C2 server address from the filename but could not. In this case, it falls back to the default value rt.teramind.co, which is preconfigured inside the MSI. However, TMROUTER is an exposed MSI property, meaning it could potentially be overridden at install time or changed in a different build. The filename in this campaign carries only the instance ID; the C2 destination is determined by the MSI’s default configuration. 
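The parsing behavior visible in those log lines can be illustrated with a short sketch. This is our reconstruction of what ReadPropertiesFromMsiName appears to do, based solely on the observed log output, not Teramind’s actual code:

```python
import re

DEFAULT_INSTANCE = "onsite"        # MSI default before the custom action runs
DEFAULT_ROUTER = "rt.teramind.co"  # fallback when no router is in the filename

def parse_msi_name(filename: str) -> dict:
    """Reconstruct the observed filename-parsing logic: extract the
    40-character hex instance ID from the s-i(__...) token, keeping the
    defaults when nothing matches (cf. "Failed to get router from msi name")."""
    props = {"TMINSTANCE": DEFAULT_INSTANCE, "TMROUTER": DEFAULT_ROUTER}
    m = re.search(r"s-i\(__([0-9a-f]{40})\)", filename)
    if m:
        # The install logs show the ID stored with a leading "__" prefix.
        props["TMINSTANCE"] = "__" + m.group(1)
    return props

print(parse_msi_name(
    "zoom_agent_x64_s-i(__941afee582cc71135202939296679e229dd7cced).msi"))
```

One sandbox-sourced filename (GoogleMeet_agent_x64_s_i_….msi) uses underscores instead of the s-i(__…) token; the real custom action evidently tolerates that variation, which this sketch does not attempt to model.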

Live detonation: what the installer actually does on a real system 

To go beyond sandbox-based behavioral analysis, we detonated the MSI installer in an isolated Windows 10 virtual machine with verbose MSI logging enabled, ApateDNS for DNS interception, and Fiddler for network monitoring. This hands-on analysis revealed several critical behaviors not visible in automated sandbox reports. 

Installation chain and the CheckHosts gate 

The MSI installer progresses through four .NET custom actions in sequence, all executed via the WiX Toolset’s zzzzInvokeManagedCustomActionOutOfProc mechanism: 

  • ReadPropertiesFromMsiName: Parses the MSI’s own filename to extract the Teramind instance ID and overwrites the default onsite value 
  • CheckAgent: Determines whether a Teramind agent is already installed on the machine 
  • ValidateParams: Validates the extracted configuration parameters 
  • CheckHosts: Performs a pre-flight connectivity check against the C2 server rt.teramind.co 

        The CheckHosts action is a hard gate: if the installer cannot reach the Teramind server, installation aborts with error code 1603. Our initial detonation attempt in a network-isolated VM failed at exactly this point: 

        TM: TMINSTANCE = __941afee582cc71135202939296679e229dd7cced 

        TM: TMROUTER = rt.teramind.co 

        CustomAction CheckHosts returned actual error code 1603 

This behavior is significant for two reasons. First, it reveals the C2 server address: rt.teramind.co. Second, it means that victims on corporate networks with restrictive DNS or outbound filtering may be inadvertently protected: the installer silently fails if it cannot phone home during installation. However, the MSI does support a TMSKIPSRVCHECK property that can bypass this check; its default value is no, so the check runs unless explicitly disabled. 

        To complete our analysis, we added rt.teramind.co to the Windows hosts file pointing to localhost, allowing the DNS resolution to succeed and the CheckHosts action to pass. The installation then completed successfully. 

        Stealth mode confirmed 

        The successful installation log confirms what the original article suspected: Teramind’s stealth mode (called Hidden Agent, a deployment option that runs silently in the background) is enabled by default in this build. The MSI property dump shows: 

        Property(S): TMSTEALTH = 1 

        This confirms the agent installs with no taskbar icon, no system tray entry, and no visible entry in the Windows Programs list. The victim has no visual indication that monitoring software is running. 

        Two services, not one 

        The install log reveals the campaign deploys two persistent services, not just the one documented in our original article: 

• tsvchst: display name “Service Host”, binary svc.exe -service, start type Automatic (boot)
• pmon: display name “Performance Monitor”, binary pmon.exe, start type Manual (demand)

        Both service names are chosen to blend in: tsvchst mimics the legitimate Windows svchost.exe naming pattern, while pmon with the display name “Performance Monitor” mimics the built-in Windows Performance Monitor. Both run as LocalSystem, the highest privilege level on a Windows machine. 

        Both services are configured with aggressive failure recovery: restart on first failure, restart on second failure, and restart on subsequent failures, with delays of 160 seconds (tsvchst) and 130 seconds (pmon). This means even if a user or security tool terminates the service, it automatically restarts within minutes. 

        Live C2 callback observed 

        Immediately after installation, ApateDNS captured the agent phoning home. DNS queries for rt.teramind.co appeared within seconds of the service starting, confirming the agent begins its callback cycle immediately. The queries repeated at approximately 11-second intervals, showing a persistent polling pattern.  

        In a real-world scenario where the victim has internet connectivity, these would resolve to Teramind’s infrastructure and the agent would begin transmitting captured data. 

        Full MSI configuration surface 

        The verbose installation log exposes every configurable parameter the MSI supports through its SecureCustomProperties list. This reveals the installer’s full configuration surface: 

        • TMSTEALTH — Stealth mode (set to 1 in this build) 
        • TMINSTANCE — Account identifier (extracted from filename) 
        • TMROUTER — C2 server address (hardcoded to rt.teramind.co) 
        • TMENCRYPTION — C2 communication encryption toggle 
        • TMSOCKSHOST / TMSOCKSPORT / TMSOCKSUSER / TMSOCKSPASSWORD — Built-in SOCKS5 proxy support for tunneling C2 traffic through proxies 
        • TMHTTPPROXY — HTTP proxy support 
        • TMSKIPSRVCHECK — Skip the C2 connectivity pre-flight check 
        • TMNODRV / TMNOFSDRV — Disable kernel filter drivers 
        • TMNOIPCCLIPBOARD — Clipboard monitoring toggle 
        • TMNOREMOTETS — Remote terminal services monitoring toggle 
        • TMHASHUSERNAMES — Anonymize/hash captured usernames 
        • TMDISABLESCREEN — Disable screenshot capture 
        • TMADDENTRYTOARP — Add/remove entry from Add/Remove Programs (off in stealth) 
        • TMCRASHUPLOADURL — Crash telemetry upload endpoint 
        • TMREVEALEDPASSWORDLESS — Toggle for passwordless reveal functionality 

        The SOCKS5 proxy support is particularly noteworthy. It means the agent can be configured to route all surveillance data through an attacker-controlled proxy, making network-level detection significantly harder by disguising C2 traffic as legitimate proxy traffic. 

        Observed Teramind instance IDs 

        The following table lists every MSI filename and corresponding Teramind instance ID we have collected. Of these, two were directly observed in the wild through our own malware domain analysis: the Zoom variant (941afee…7cced, captured from uswebzoomus[.]com) and the Google Meet variant (06a23f8…2844d, captured from googlemeetinterview[.]click). The remaining filenames were sourced from sandbox repositories.  

        As noted above, some of these may represent legitimate enterprise deployments rather than malicious use. All files share the same SHA-256 hash. Two filenames share the same instance ID c0cea71…0a6d7, indicating the same attacker account was used across multiple filename variations. 

MSI filename | Instance ID

• zoom_agent_x64_s-i(__941afee582cc71135202939296679e229dd7cced).msi | 941afee582cc71135202939296679e229dd7cced
• ZoomApp_agent_x64_s-i(__fca21db2bb0230ee251a503b021fe02d2114d1f0).msi | fca21db2bb0230ee251a503b021fe02d2114d1f0
• 945bd48ad7552716f4583_s-i(__d72c88943945bd48ad7552716f4583ada0b7c2a6).msi | d72c88943945bd48ad7552716f4583ada0b7c2a6
• teramind_agent_x64_s-i(__572d85bb94f4f59ef947c3faf42677f9adb223c3).msi | 572d85bb94f4f59ef947c3faf42677f9adb223c3
• file_agent_x64_s-i(__f76fee1df21e19d93d5842f50c375286477b3f6c).msi | f76fee1df21e19d93d5842f50c375286477b3f6c
• teramind_agent_x64_s-i(__653d105a51cc886dede8101d1b0cd02e20329546).msi | 653d105a51cc886dede8101d1b0cd02e20329546
• e411293f92e8730f717_s-i(__c0cea713de411293f92e8730f71759aa1890a6d7).msi | c0cea713de411293f92e8730f71759aa1890a6d7
• 0154299765aa7b198bce97d8361_s-i(__c0cea713de411293f92e8730f71759aa1890a6d7).msi | c0cea713de411293f92e8730f71759aa1890a6d7
• GoogleMeet_agent_x64_s-i(__ab28818c0806ce7996c10c59b0e4e5d102783461).msi | ab28818c0806ce7996c10c59b0e4e5d102783461
• teramind_agent_x64_s-i(__5ca3d9dd35249200363946b1f007b59f88dbde39).msi | 5ca3d9dd35249200363946b1f007b59f88dbde39
• file_agent_x64_s-i(__81c39bed817fc9989834c81352cb7f69b94342da).msi | 81c39bed817fc9989834c81352cb7f69b94342da
• GoogleMeet_agent_x64_s_i_94120be3942474019852c62041d2f373fdb11a0e.msi | 94120be3942474019852c62041d2f373fdb11a0e
• AdobeReader_agent_x64_s-i(__d57d34e76cc8c2c883cbdcb42a14c47d00be03c0).msi | d57d34e76cc8c2c883cbdcb42a14c47d00be03c0
• teramind_agent_x64_s-i(__06a23f815bc471c82aed60b60910b8ec1162844d).msi | 06a23f815bc471c82aed60b60910b8ec1162844d

        The variety of filename prefixes is notable: zoom_agent, ZoomApp_agent, GoogleMeet_agent, AdobeReader_agent, teramind_agent, and file_agent. This suggests the campaign extends beyond video conferencing impersonation.  

        However, the AdobeReader-branded variant was found only in sandbox repositories and may represent testing or planned expansion rather than an active deployment. The filenames with generic prefixes like teramind_agent and file_agent similarly appear to be sandbox submissions that retained the default naming rather than a brand-specific social engineering lure. 

        Indicators of Compromise 

        File hashes 

        SHA-256: 644ef9f5eea1d6a2bc39a62627ee3c7114a14e7050bafab8a76b9aa8069425fa 

        MD5: AD0A22E393E9289DEAC0D8D95D8118B5 

        Domains 

        • uswebzoomus[.]com (Zoom variant: taken down by Namecheap) 
        • googlemeetinterview[.]click (Google Meet variant: active as of 2026-02-26) 
           
        Detection and defense recommendations 
        • Alert on the ProgramData GUID directory {4CEC2908-5CE4-48F0-A717-8FC833D8017A}. This GUID is fixed across all observed variants. 
        • Query for both services: sc query tsvchst and sc query pmon. Either running on a non-corporate machine confirms active surveillance. 
        • Watch for kernel driver loads: tm_filter.sys and tmfsdrv2.sys loading on personal machines should trigger high-severity alerts. 
        • Block MSI execution from browser download directories. Both variants rely on the user running an MSI from their Downloads folder. Application control policies that prevent MSI execution from user-writable paths would stop this attack chain. 
        • Educate employees: Never update applications by clicking links in messages. Use the application’s built-in update mechanism or navigate to the vendor’s official website manually. 
        • Deploy browser policies that warn on or block automatic file downloads from unrecognized domains. 
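The service and directory checks above can be combined into a quick triage script. This is an illustrative sketch for Windows hosts, not a detection product; the service names and GUID directory come from the IOCs documented in this article:

```python
import os
import subprocess

IOC_SERVICES = ["tsvchst", "pmon"]
IOC_DIR = r"C:\ProgramData\{4CEC2908-5CE4-48F0-A717-8FC833D8017A}"

def service_exists(name: str) -> bool:
    """`sc query <name>` exits non-zero when the service is not installed."""
    try:
        result = subprocess.run(["sc", "query", name],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return False  # sc.exe not available (non-Windows host)
    return result.returncode == 0

def check_host() -> list:
    """Return a list of matched Teramind IOCs on this machine."""
    findings = [f"service present: {s}" for s in IOC_SERVICES
                if service_exists(s)]
    if os.path.isdir(IOC_DIR):
        findings.append(f"IOC directory present: {IOC_DIR}")
    return findings

print(check_host() or "no Teramind IOCs found")
```

Any non-empty result on a machine that is not a legitimate, known Teramind deployment warrants a full incident response, not just removal.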
        Removal 

        To uninstall the agent, run the following command as Administrator: msiexec /x {4600BEDB-F484-411C-9861-1B4DD6070A23} /qb. This removes the services, kernel drivers, and most installed files. However, our testing confirmed the uninstaller fails to fully delete the ProgramData directory due to runtime-generated files. After uninstalling, manually remove any remnants with rmdir /s /q "C:\ProgramData\{4CEC2908-5CE4-48F0-A717-8FC833D8017A}" and reboot to fully unload the kernel drivers from memory. 
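The removal steps can be strung together into one script. A sketch only: run it from an elevated prompt and reboot afterwards to unload the kernel drivers; the product code and leftover directory are the ones documented above.

```python
import shutil
import subprocess

PRODUCT_CODE = "{4600BEDB-F484-411C-9861-1B4DD6070A23}"
LEFTOVER_DIR = r"C:\ProgramData\{4CEC2908-5CE4-48F0-A717-8FC833D8017A}"

def uninstall_agent() -> None:
    """Run the documented MSI uninstall, then sweep up the ProgramData
    directory the uninstaller is known to leave behind."""
    # Step 1: standard MSI uninstall (removes services, drivers, most files).
    subprocess.run(["msiexec", "/x", PRODUCT_CODE, "/qb"], check=True)
    # Step 2: delete the runtime-generated leftovers manually.
    shutil.rmtree(LEFTOVER_DIR, ignore_errors=True)
    print("Uninstall complete; reboot to unload the kernel drivers.")
```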

        Conclusion 

        This campaign demonstrates the abuse of legitimate commercial monitoring software. The attackers did not write custom malware. Instead, they took an off-the-shelf monitoring product, leveraged its built-in stealth mode and filename-based configuration system, and wrapped it in social engineering designed to exploit trust in brands like Zoom and Google Meet.  

        The expansion to Google Meet, plus additional sandbox-sourced variants including an AdobeReader-branded filename, suggests this is an evolving operation that may expand to impersonate other applications. 

        Our hands-on detonation revealed details invisible to automated sandboxes: the CheckHosts C2 pre-flight gate, the rt.teramind.co router address, the second pmon service masquerading as Performance Monitor, the confirmed TMSTEALTH = 1 flag, and the full SOCKS5 proxy capability for C2 evasion. The fact that a single binary serves unlimited attacker accounts through nothing more than a filename rename makes this campaign easily scalable. 

        Acknowledgments 

        We would like to thank security researcher @JAMESWT_WT for promptly reporting the original malicious domain to Namecheap, leading to the takedown of uswebzoomus[.]com. 

        Teramind has stated that the company was not involved in this campaign. Because Teramind is a legitimate commercial product, it is not flagged by security software, meaning we have no visibility into whether this campaign has resulted in real-world infections. What we can confirm is that the infrastructure we documented—including purpose-built phishing domains impersonating Zoom and Google Meet, fake Microsoft Store pages, and a Teramind agent configured in stealth mode—is consistent with a campaign designed to deploy monitoring software onto targets’ machines without their knowledge or consent.

Videos and screenshots

• Fake Zoom update clickthrough.
        • Fake Google Meet site.
        • Traffic to the fake Zoom site.
        • Google Meet interview traffic.
        • Namecheap takedown.
        • Before install. No Teramind service.
        • Before install. No Teramind folder.
        • After install.
        Categories: Malware Bytes

        How to understand and avoid Advanced Persistent Threats

        Malware Bytes Security - Thu, 02/26/2026 - 1:52pm

        By definition, an advanced persistent threat (APT) is a prolonged, targeted attack on a specific victim with the intention to compromise their system and gain information from or about that target.

About a decade ago, the term was mostly used for state-sponsored threat actors. “Threat actors” is the right term here because, in the states they operate from and for, they are not seen as cybercriminals. That perception changes, of course, when you’re on the receiving end of such an attack.

        When these threats were first identified, their targets were governments and military organizations. Nowadays, the target can be any person, organization or business. We commonly see attacks on healthcare, telecoms, finance, MSPs, SaaS platforms, and supply chain providers.

        “APT” is often used as a dramatic label for any serious breach, even if it was short‑lived or opportunistic. So, let’s break down the name to see what really qualifies as an APT.

        Advanced

Advanced does not necessarily mean Hollywood-level hacking, but it does mean the attackers are deliberate and well prepared. They often combine several techniques: buying or discovering new, unknown software flaws (so-called zero-day vulnerabilities), abusing old but unpatched bugs, and crafting very convincing phishing emails that look like genuine messages from colleagues or partners. They may also use legitimate admin tools already present in the network, so-called LOLBins (Living Off the Land Binaries), which makes their activity harder to spot because it looks like normal IT work.

In practice, “advanced” is less about using the fanciest tool and more about choosing the right mix of tools and tactics for a specific victim. An APT group might spend weeks studying a target’s people, systems, and suppliers, then analyze that data with the help of AI. That way, when they finally make a move, it has the highest chance of working on the first try.

        Persistent

        Persistence is what makes APTs so dangerous. These attackers don’t care about a quick hit‑and‑run raid. They want to break in, stay inside, and keep coming back for as long as access is useful to them. If defenders discover their activity and kick them out of one system, they may use another back door they prepared earlier, or will simply regroup and look for a new way in.

        Being persistent also means they move slowly and quietly. Attackers may spend months exploring the network, creating multiple hidden entry points, and regularly checking back in to see what new data has appeared that is worth stealing. From the defender’s point of view, this turns the incident from a single event into an ongoing campaign. You have to assume the attackers will try again, even after you think you have removed them.

        Threat

The word threat doesn’t refer to a single piece of malware. It refers to the whole operation: the people, their tools, and their infrastructure. An APT usually includes several types of attacks.

        An APT may involve phishing, exploiting vulnerabilities, installing remote access tools, and stealing or abusing passwords. Together, these activities form the threat to the organization’s systems and data.

        Behind the threat is a team with a goal (for example, stealing sensitive designs, spying on communications, or preparing for future disruption), and with the patience and resources to keep pushing until they reach that goal.

        How to stay safe

        To avoid falling victim to an APT, assume you could be up against a formidable opponent.

        • Be cautious with unexpected emails, messages and attachments, not just at work.
        • Use passkeys where possible and strong, unique passwords where not, and a password manager.
        • Turn on multi‑factor authentication (MFA) wherever possible.
        • Keep your software and hardware updated, especially public-facing network equipment.
        • Use an up-to-date, real-time anti-malware solution, preferably with a web protection component.
• Take note of any out-of-the-ordinary activity and report it; even small details can turn out to be important later.


        Categories: Malware Bytes

        The Conduent breach; from 10 million to 25 million (and counting)

        Malware Bytes Security - Thu, 02/26/2026 - 6:16am

        The Conduent breach has quietly grown into one of the biggest third‑party data incidents in US history, and the real story now is how many different programs and employers are swept up in it, even for people who have never heard of Conduent.

        When we first covered this incident, public filings suggested roughly 10.5 million affected individuals, heavily concentrated in Oregon and a few other states. Fresh state notifications reportedly put the total at more than 25 million people across the US, with Texas alone jumping from an early estimate of about 4 million to 15.4 million residents impacted, and Oregon holding at around 10.5 million.

        That makes this one of the largest healthcare‑related breaches on record, with attackers reportedly spending about three months in Conduent’s environment and exfiltrating around 8 TB of data.

        How are so many people affected who have never heard of Conduent?

        In 2019, Conduent said its systems supported services for more than 100 million people nationwide and served a majority of Fortune 100 companies plus more than 500 government entities. That shows just how broad the potential blast radius is, even if not all of those records were touched in this incident.

        Conduent sits behind the scenes of a major portion of US public services and corporate back‑office work, which explains why the victim list looks so disconnected. Its platforms handle:

        • State benefit programs such as Medicaid, SNAP (Supplemental Nutrition Assistance Program), and other government payment disbursements in more than 30 states.
        • Mailroom, printing, and payment processing for state benefit offices and healthcare programs, including large health insurers like Blue Cross Blue Shield plans.
        • Corporate services for major employers, including at least one large automotive manufacturer; nearly 17,000 Volvo Group employees are confirmed among those whose data was exposed.
        Who stole what?

        The cyberattack was later claimed by the SafePay ransomware gang.

        Image courtesy of Comparitech

        The stolen data goes far beyond contact details. Notification letters and regulator filings describe:

        • Full legal names, postal addresses, and dates of birth.
        • Social Security numbers and other government identifiers.
        • Medical information, health insurance details, and related claims data.

        Because Conduent processes benefits and HR data on behalf of agencies and employers, most people affected never interacted with Conduent directly and may not even recognize the name on the envelope. If you received SNAP benefits, Medicaid coverage, other state‑administered healthcare, or worked for an organization that outsources HR or claims administration to Conduent (or one of its clients), your data may have flowed through its systems even though your “customer relationship” was with a state agency, insurer, or employer.

        Why this is worse than it first looked

        There are three reasons why this follow‑up story is more serious than the original:

        • More people are involved: The raw numbers climbed from 10 million to 25 million as more states and corporate clients disclosed involvement, showing how opaque third‑party breaches can be at the start.
        • Forever identifiers: SSNs plus medical and insurance data enable long‑tail identity theft, medical fraud, and highly targeted phishing that can haunt victims for years.
        • Third-party blind spot: For many covered entities, “the breach” will never show up in their own logs because the compromise happened in a vendor’s environment they rely on but do not control.

        So when an unexpected letter from Conduent arrives, it’s not a mistake. It’s a reminder that your data can be put at risk far away from the organizations you thought you were dealing with—and that the real exposure from this breach extends well beyond the numbers in any single state filing.

        Conduent breach notification letter

Depending on which of your data was compromised, you may receive a slightly different letter. If you receive one, read our guide on what to do after a data breach to understand your next steps.

        We don’t just report on threats—we help safeguard your entire digital identity

        Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.


        Categories: Malware Bytes

        Instagram flagged explicit messages to minors in 2018. Image-blurring arrived six years later

        Malware Bytes Security - Thu, 02/26/2026 - 5:34am

        Meta took six years to blur explicit images on Instagram, even though internal emails show executives were aware in 2018 that minors were receiving them, according to newly unsealed court documents.

        In a deposition given last year, Adam Mosseri (now the head of Instagram) discusses an email thread with Guy Rosen, Meta’s VP and chief information security officer at the time. Rosen explained in the thread that adults could find and message minors on the platform. The messages could contain what Rosen called:

        “tier 2 sexual harassment, like dudes sending dick pics to everyone”

        up to…

        “tier 1 cases where they end up doing horrible damage.”

        The tool Meta now uses to address the problem is a client-side classifier that automatically blurs explicit images sent to teens in direct messages. But it wasn’t rolled out until roughly six years after that email exchange, in September 2024.

        The deposition was unsealed last week and filed on February 20, 2026, in MDL No. 3047 (Case No. 4:22-md-03047-YGR), a multidistrict litigation case in Northern California in which hundreds of families allege that platforms including Instagram were designed to maximize screen time at the expense of young users’ well-being. The filing is available through the court’s PACER docket.

        Internal records reveal teen safety concerns at Meta

        The filing also surfaces internal survey data that Instagram had kept confidential. Nearly one in five respondents aged 13 to 15 reported encountering unwanted nudity or sexual imagery on the platform. A further 8.4% of them said they had seen someone harm themselves or threaten to do so on Instagram within the past week.

        Instagram’s own Transparency Center didn’t disclose this at the time. Its child-endangerment section stated simply that the company was still working on the numbers. Mosseri also confirmed he had never publicly shared an internal estimate of around 200,000 daily child users experiencing inappropriate interactions, a figure referenced during questioning.

        His defence, and Meta’s, rests on the claim that the company was not idle during those six years. Mosseri told the court that other protections were introduced in the interim, including restrictions on adults messaging teens they are not connected to, and systems designed to flag potentially risky accounts.

        He pushed back on the idea that parents should have been explicitly warned about unmonitored direct messages, arguing that the risk exists on many messaging platforms. Meta spokesperson Liza Crenshaw pointed to Teen Accounts and parental controls, saying the company has been working on the problem for years.

        Other allegations against Meta

        The nudity filter is not the only safety measure under scrutiny. Court filings in related proceedings allege Meta explored making teen accounts private by default as early as 2019, then dropped the idea over concerns it would damage engagement metrics. That default-private switch did not arrive until September 2024.

        Whistleblower Arturo Béjar, a former Meta engineering director, told the US Senate in 2023 that he had raised teen safety concerns directly with Mosseri and other executives. He acknowledged that the company researched these harms extensively, but questioned whether it acted with sufficient urgency.

        An independent audit published in September 2025 found that of 47 teen safety features Instagram publicly promoted, fewer than one in five functioned as described.

        Mosseri’s 2023 performance self-review, entered as an exhibit in the case, celebrated revenue at all-time highs and boasted about delivering results despite cutting his team by 13%. Teen well-being did not appear as a criterion in that review. He explained that well-being sat with a centralized Meta team, outside his direct remit.

        In a courtroom weighing whether Instagram’s leadership prioritised growth over safety, that distinction may not land the way he hopes.

        We don’t just report on threats – we help protect your social media

        Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

        Categories: Malware Bytes
