
Over 300,000 Americans unknowingly handed their private emails, passwords, and sensitive data to cybercriminals through fake AI browser extensions that infiltrated Google’s own Chrome Web Store—some even bearing Google’s “Featured” badge of approval.
Story Highlights
- 30 malicious Chrome extensions impersonating ChatGPT, Gemini, and other AI tools harvested data from up to 300,000 users
- Attackers used hidden iframes to bypass Chrome Web Store security reviews, routing user inputs to criminal servers
- Stolen data includes Gmail credentials, passwords, API keys, and every AI prompt users entered
- Google removed extensions only after security researchers exposed the campaign in February 2026
- Similar attacks previously compromised over 10 million users, highlighting systemic platform vulnerabilities
Sophisticated Deception Exploits AI Hype
Security researchers at LayerX discovered 30 fraudulent Chrome extensions masquerading as legitimate AI assistants like ChatGPT, Gemini, Claude, and Grok.
These extensions accumulated between 260,000 and 300,000 installations through what cybersecurity experts call “extension spraying”—flooding the Chrome Web Store with near-identical malicious variants under different names to evade detection.
The attackers leveraged the explosive popularity of AI tools to trick users into granting broad permissions, including the ability to “read all web content.” Some extensions even received Google’s “Featured” designation, lending false credibility that lured unsuspecting conservatives and professionals seeking productivity enhancements.
Fake AI Chrome extensions with 300K users steal credentials, emails https://t.co/RVGmfaLGCE
— Lifeboat Foundation (@LifeboatHQ) February 20, 2026
Hidden Iframes Bypass Security Reviews
The attackers employed a technical scheme that exposed critical failures in Google’s review process. Rather than embedding malicious code directly into the extensions—which would trigger detection during Chrome Web Store screening—they loaded remote iframes from attacker-controlled servers.
These iframes acted as invisible intermediaries, proxying every user keystroke, email, and AI prompt to criminal infrastructure in real time. LayerX researchers identified shared backend systems connecting all 30 extensions through matching TLS certificates and JavaScript bundles.
This approach allowed attackers to modify malicious behavior dynamically without updating the extensions themselves, rendering Google’s static analysis methods useless. The technique specifically targeted Gmail users and AI chat interfaces, harvesting credentials and API keys worth substantial sums on criminal marketplaces.
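The certificate-matching technique researchers used to tie the 30 extensions to shared backend infrastructure can be illustrated with a short sketch. Everything below is hypothetical and for illustration only: the endpoint-to-certificate map and the SHA-256 fingerprinting choice are assumptions, not LayerX’s published methodology.

```python
# Illustrative sketch only: group extensions whose remote iframe endpoints
# present the same TLS certificate, a sign of shared attacker infrastructure.
# The fake certificate bytes below stand in for real DER-encoded certs.
import hashlib
from collections import defaultdict

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def cluster_by_certificate(observed: dict[str, bytes]) -> dict[str, list[str]]:
    """Map each certificate fingerprint to the extensions that served it.

    observed: extension name -> DER certificate bytes captured from that
    extension's remote iframe endpoint.
    """
    clusters: dict[str, list[str]] = defaultdict(list)
    for extension, der_cert in observed.items():
        clusters[cert_fingerprint(der_cert)].append(extension)
    return dict(clusters)

# Hypothetical observations: two extensions sharing one certificate.
cert_a, cert_b = b"fake-der-cert-A", b"fake-der-cert-B"
observed = {
    "ChatGPT Helper": cert_a,
    "Gemini Sidebar": cert_a,   # same cert as above -> same backend
    "Claude Quick Chat": cert_b,
}
clusters = cluster_by_certificate(observed)
```

In practice an analyst would capture live certificates (for example with Python’s `ssl.get_server_certificate`) rather than a prebuilt map; the clustering step itself is the same.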
Google’s Delayed Response Raises Accountability Questions
Google removed the first extension on February 6, 2025, but the coordinated campaign continued undetected for months until LayerX published its findings in early February 2026. Following media inquiries from Fox News and others, Google confirmed to reporters that “extensions from this report have all been removed” from the Chrome Web Store.
Yet the timeline reveals a troubling pattern: extensions accumulated hundreds of thousands of installs while some displayed Google’s own “Featured” badge, suggesting the tech giant’s vetting processes failed to protect users from an obvious threat.
Security analyst Zargarov from Paubox noted that these review gaps unfairly shift the security burden onto everyday users who trust platform gatekeepers. This incident follows a disturbing precedent: prior attacks using similar tactics compromised 8.8 million users through DarkSpectre malware and stole data from over 900,000 users via fake AI extensions.
Users Face Lasting Consequences From Data Theft
The stolen information extends far beyond simple browsing history. Attackers captured Gmail credentials enabling access to personal correspondence, financial records, and confidential communications. API keys—used by developers and businesses to authenticate software services—provide criminals entry points into corporate systems and cloud platforms.
Every AI prompt entered by victims now resides on criminal servers, potentially exposing proprietary business strategies, legal discussions, medical inquiries, and private thoughts users assumed were secure.
Enterprises face additional risks when employees install such extensions on work devices without IT oversight, creating backdoors into corporate networks. The long-term economic impact includes identity theft, unauthorized account access, phishing campaigns using stolen email contacts, and potential blackmail leveraging sensitive captured conversations.
300,000 Chrome users hit by fake AI extensions https://t.co/fx1nnsI4O2
— Fox News AI (@FoxNewsAI) February 26, 2026
Platform Accountability and User Vigilance Essential
This breach underscores the tension between innovation and security in the unregulated AI extension marketplace. LayerX researchers describe these malicious extensions as “general-purpose access brokers” that fundamentally break the browser security model through remote code execution.
Google’s reactive approach—removing extensions only after public exposure rather than proactive detection—mirrors Big Tech’s frequent pattern of prioritizing growth over user protection. Conservatives rightly question whether platforms deserve liability shields when their negligent review processes enable mass data theft.
Users must immediately audit installed Chrome extensions, remove any AI assistants not from verified developers, and scrutinize permission requests that demand access to “all websites” or email content. The National Cyber Security Centre emphasizes implementing strict IT policies in workplaces to prevent unauthorized extension installations, recognizing individual vigilance alone cannot counter industrialized cybercrime exploiting platform vulnerabilities.
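A minimal sketch of what such an audit could look like: scanning extension manifests for the broad permissions the article warns about. The `RISKY` set and directory layout are assumptions (Chrome stores installed extension metadata, including each `manifest.json`, under the profile’s Extensions folder); a flagged extension is a candidate for review, not proof of malice.

```python
# Hedged sketch: flag extension manifests requesting broad access such as
# "<all_urls>" or all-site host patterns. Flagged != malicious; it only
# marks extensions worth a closer look.
import json
from pathlib import Path

# Permissions / host patterns that grant very broad access (assumed list).
RISKY = {
    "<all_urls>", "*://*/*", "http://*/*", "https://*/*",
    "tabs", "webRequest", "cookies", "history",
}

def risky_permissions(manifest: dict) -> set[str]:
    """Return the risky permissions a manifest requests."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))  # Manifest V3 splits these out
    return requested & RISKY

def audit_extensions(extensions_dir: str) -> dict[str, set[str]]:
    """Scan manifest.json files under a Chrome Extensions directory and
    report extensions requesting risky permissions."""
    findings: dict[str, set[str]] = {}
    for manifest_path in Path(extensions_dir).glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable manifest: skip rather than abort the audit
        flagged = risky_permissions(manifest)
        if flagged:
            findings[manifest.get("name", manifest_path.parent.name)] = flagged
    return findings

# Hypothetical manifest resembling the fake AI assistants described above:
fake_ai_manifest = {
    "name": "AI Chat Assistant",
    "manifest_version": 3,
    "permissions": ["tabs", "storage"],
    "host_permissions": ["<all_urls>"],
}
```

On Linux the directory is typically `~/.config/google-chrome/Default/Extensions` (the path varies by operating system); the `chrome://extensions` page shows the same permissions interactively for users who prefer not to run a script.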
Sources:
300,000 Chrome users hit by fake AI extensions – Fox News
Fake AI browser extensions steal data from over 260K Chrome users – Paubox
Fake AI Chrome Extensions with 300K Users Steal Credentials, Emails – NetManageIT
