CRXFILTRATE: An Undocumented JavaScript Execution Backdoor in a Chrome Extension Network.
Reverse-engineered payload, mapped infrastructure, and deployable detection signatures. A coordinated browser extension cluster bypasses Manifest V3's remote-code prohibition by stripping page defenses and executing operator-delivered JavaScript in the page context.
One extension. A coordinated cluster. A production JavaScript execution channel.
Phoenix Invicta is the public name Wladimir Palant gave to a coordinated browser extension adware cluster in his January 2025 analysis. We use the name as a technical cluster label, consistent with that prior public attribution. Palant documented 14 of the cluster's extensions and reverse-engineered the core CSP-stripping mechanism. Sixteen months later, our research extends that work in five directions.
First, the cluster is larger and more actively maintained than prior reporting documented. We mapped roughly 60 active and reserved domains, identified a 13-domain config and staging cluster, obtained source-level evidence from three samples, and identified extensions Palant did not catalogue.
Second, we documented the static fingerprint that ties the factory together. The themed fake-header CSP-stripping pattern appears in confirmed cluster extensions and produced no false positives in the 265-extension corpus scanned for the paper. The corpus is biased toward adjacent categories, so defenders should re-test locally before treating it as globally exhaustive.
Third, we obtained the JavaScript execution backdoor at source level. The Gen 1 framework posts the visited hostname, install UUID, extension ID, and country tag to statsdata[.]online/alk/g2.php, then injects the server response as JavaScript into the page's own realm. The paper also documents the rotated backup at secdomcheck[.]online.
Fourth, we documented the C2 round-trip mechanism. Exfiltrated data is carried in the request URL, payload code is delivered in the response body, and the round-trip happens in a single HTTP transaction. Proxies that block the response have already let the request through. The block prevents monetization. It does not prevent surveillance.
Fifth, we obtained and reverse-engineered the live production payload (m3011.js, currently served from fivestat[.]com). The first-generation payload that Palant analyzed (redirect_checker.js) has been replaced. The new payload is a Webpack-bundled production build with module-aware architecture, native support for Google, Bing, and Yahoo, computed CSS cloning that reads styles off the live SERP at runtime, server-side per-victim customization, and a distributed reconnaissance endpoint that uses infected browsers to discover new SERP ad layouts. The cluster has rotated revenue from Google AdSense for Search to Yahoo Search Partner.
This is not a fire-and-forget malware drop. It is a software product with a roadmap, a versioned release pipeline, and a development team that monitors upstream dependencies.
On Attribution
Wladimir Palant publicly attributed this cluster to a corporate entity named Phoenix Invicta Inc. in January 2025, based on Chrome Web Store publisher attribution, shared infrastructure, and corporate registration records. We adopt the cluster name throughout this paper to preserve continuity with his published research, and we credit his attribution where it appears. We do not, in this paper, make independent legal claims about the corporation, the individuals operating it, or every downstream monetization endpoint. References to "the cluster," "the operator," or "the development team" are descriptions of the technical operation, not legal claims about specific persons or corporations.
From one shade of blue to a coordinated cluster.
The starting point was mundane: one of us wanted a very specific shade of blue and was about to download a color-dropper extension to sample it. Before installing a new extension into a real browser profile, we looked at the listing, the declared permissions, and the public threat-intelligence footprint. One sample, MyColorPick, had no public threat-intelligence record and declared permissions that did not fit a simple color picker.
That is what made the story plausible and dangerous. A color picker is exactly the kind of tool a designer, marketer, engineer, or analyst might install for a one-minute task and then forget. After installation in an isolated test VM, every page the browser loaded carried an injected JavaScript file labeled redirect_checker.js in Chrome DevTools. The file was not served by the visited site. It was injected by the extension into the page context.
Investigation timeline: from one anomalous color picker to a named cluster.
1. A normal color-dropper use case leads us to review MyColorPick before trusting it in a real browser profile.
2. The listing has no public threat-intelligence footprint, while the manifest requests broad page access for a simple color-picker function.
3. redirect_checker.js appears in DevTools on every page, even though the visited site never served that file.
4. The Gen 1 framework reports visited hostnames, install ID, extension ID, and country tag to statsdata[.]online.
5. The server response is appended as a script element and executes in the visited page's own JavaScript realm.
6. Palant's January 2025 research surfaces. The mechanism is public. The operation is still running, larger than before.
The extension network and the distribution pipeline.
Across the broader cluster, we count 22 confirmed extensions, one lower-confidence ledger-only entry, and roughly 60 active or reserved domains. The cluster operates through many disposable developer accounts rather than under a single publisher name. This structure makes takedown of any one account incomplete and lets new extensions appear faster than the Chrome Web Store can investigate them.
The extension inventory
At paper validation, three confirmed cluster extensions were still live or installable: Easy Dark Mode on Chrome Web Store as an unlisted listing, plus two Microsoft Edge Add-ons listings. Most of the rest had been removed or delisted, but removal from a store is not the same as removal from endpoints. Pre-takedown installs still need browser-policy enforcement.
| Extension | Extension ID | Users (last obs.) | Status at validation | Config Domain |
|---|---|---|---|---|
| Easy Dark Mode | ibbkokjdcfjakihkpihlffljabiepdag | 869 | Unlisted but live | easy-dark-mode.online |
| 1-Click Color Picker: Instant Eyedropper | bkknccgnmpcnhppklomdjkphccmpblga | not listed | Active on Edge | 1-click-cp.com |
| AdBlock for Youtube: SkipAds | jbdegnmcajkhjemebonejojlgkgcddhc | 11,142 | Active on Edge | · |
| ScreenCapX | ihfedmikeegmkebekpjflhnlmfbafbfe | 20,000 | Delisted | screencapx.co |
| MyColorPick | jckoejjnaljgkmgblmbodoegoefofhee | 10,000 | Removed | not identified |
| 1-Click Color Picker | fmpgmcidlaojgncjlhjkhfbjchafcfoe | 10,000 | Removed | · |
| Better Color Picker | gpibachbddnihfkbjcfggbejjgjdijeb | 10,000 | Removed | · |
| ColorPickPro | aplhgigkopkholapijailboandapfaim | 10,000 | Removed | · |
| Volume Booster | ojkoofedgcdebdnajjeodlooojdphnlj | 8,000 | Removed | super-sound-booster.info |
| AdBlock for YouTube: Skip-n-Watch | coebfgijooginjcfgmmgiibomdcjnomi | 3,000 | Removed | skip-n-watch.info |
| Font Expert | pjlheckmodimboibhpdcgkpkbpjfhooe | 666 | Delisted | font-expert.pro |
| AdBlock: Ads and YouTube | nonajfcfdpeheinkafjiefpdhfalffof | 641 | Removed | adblock-ads-and-yt.pro |
| Manual Finder 2024 | ocbfgbpocngolfigkhfehckgeihdhgll | 280 | Removed | · |
| Manuals Viewer | ieihbaicbgpebhkfebnfkdhkpdemljfb | 101 | Removed | manuals-viewer.info |
| SkipAds Plus | emnhnjiiloghpnekjifmoimflkdmjhgp | 95 | Removed | skipadsplus.online |
| Capture It | lkalpedlpidbenfnnldoboegepndcddk | 48 | Delisted | capture-it.online |
| Click & Pick | acbcnnccgmpbkoeblinmoadogmmgodoo | 20 | Removed | · |
| Dopni: Automatic Cashback | ekafoahfmdgaeefeeneiijbehnbocbij | 19 | Removed | · |
| SimpleSnap (via KeepAware research) | nbljjljaoanknannhlonmaknhckcoldi | not listed | Delisted | s8.traffktrackr.com |
| SnipCapture | jlpchojjamcikhgmedobmfodcefjmccn | not listed | Removed | 8melo.fun |
| RecItEasy | pnhkolkelkfnfphohbdnboedhejlfbho | not listed | Status not publicly verified | hjk-9l.cloud |
| ExtraSound | mkoegjeakpnbjklhimnimkgokbifeaoh | 10,000+ | Removed | · |
| Ledger-only entry | mmjhombiehngfpipefodkebphfnblphe | not listed | Removed, lower confidence | · |
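The inventory above translates directly into a local fleet audit. The sketch below sweeps a Chrome profile's Extensions directory for cataloged IDs; the two IDs shown are confirmed entries from the table, and real deployments should carry the full reference set. As §15 notes, a zero-hit sweep does not clear the fleet on its own; pair it with the DNS and proxy-log review.

```python
from pathlib import Path

# Two confirmed IDs from the inventory above; extend with the full reference set.
CLUSTER_IDS = {
    "ibbkokjdcfjakihkpihlffljabiepdag",  # Easy Dark Mode
    "jckoejjnaljgkmgblmbodoegoefofhee",  # MyColorPick
}

def installed_cluster_extensions(profile_dir: str) -> list[str]:
    """Return cataloged extension IDs present in a Chrome profile.

    Chrome stores each installed extension under
    <profile>/Extensions/<32-char id>/<version>/, so a directory-name
    match against the ID set is sufficient for triage.
    """
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return []
    return sorted(d.name for d in ext_root.iterdir()
                  if d.is_dir() and d.name in CLUSTER_IDS)
```

The profile path differs per OS (for example, `~/.config/google-chrome/Default` on Linux); pass the profile directory explicitly rather than hard-coding it.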
The distribution site
The reverse IP results on the per-extension config server (79.141.164.251) included one domain that is not an extension config endpoint: best-browser-extensions[.]com. This is the cluster's marketing funnel, a landing page designed to drive Chrome Web Store installs.
| Tracker | Handling | Purpose |
|---|---|---|
| Session replay | Specific identifiers withheld unless listed in the scoped IOC reference | Records visitor interaction with the install funnel |
| Google Analytics | Use the paper-aligned analytics IDs below | Standard page analytics |
| gtag events | add_to_chrome_click, ext_install_gtm | Tracks CWS install funnel conversion |
The install funnel uses analytics and session-replay style instrumentation to measure how visitors interact with the page, where they hesitate, and what drives them to click "Add to Chrome." This is conversion rate optimization applied to malware distribution. The ext_install_gtm event name reveals the funnel endpoint: the page detects when a user completes the Chrome Web Store installation flow and fires a tracking event. The operator knows their install conversion rate with the same precision any legitimate SaaS company tracks theirs.
This is an actively-maintained operation, not a drive-by drop.
Three pieces of evidence point to professional engineering. First, the deployed JavaScript checks for two generations of Google's internal ad CSS class names: human-readable (.styleable-title) and the newer obfuscated form (.si27, .si28, .si29). Neither set appears in any public Google documentation. Someone is monitoring Google's ad rendering pipeline for class name changes. Second, the distribution infrastructure uses analytics and install-funnel event tracking. They optimize their conversion rate. Third, the production payload reverse-engineering reveals a Webpack build with versioned release directories (mva_v0910, nva_v3003, nva_v3004, nva_v0302, nva_v0303 all captured live), a feature-flag system with server-side toggles, and a distributed-reconnaissance endpoint. The m3011.js payload itself carries source-code comments referencing internal tickets: "fix MM-390; added mb to first element in searchCenterBA", "fix MM-394 MM-399 limit compList items", and "fix MM-397; hide title that leads directly to yahoo". The MM- prefix and ticket numbers above 390 indicate a project tracker with at least 399 logged tickets behind it. This is what an actively-maintained software product looks like.
A second delivery path also reaches victims of this cluster: third-party trackers and ad-chain sub-resources on legitimate websites can pull cluster-controlled JavaScript into a visited page without any extension installed on the host. Privilege is lower than the extension path, scope is bounded by the trigger origin, and nothing on disk is there to find. Both paths share the same IOC set in §16, and an extension-presence audit that returns zero hits does not prove the fleet is clean; run the DNS and proxy-log sweep alongside it (§15).
How the attack works.
Each extension is built around four mechanical capabilities. Each capability has plausible deniability. None of the four, individually, is a clear policy violation. Together, they constitute a JavaScript execution platform.
Universal host permissions
The manifest requests the <all_urls> host permission, so the extension can read and modify any page the user visits. The stated function justifies it (a color picker has to read pixels off any page), but the same permission opens injection on every banking, SSO, and admin console page in the session.
CSP header stripping
The extension uses declarativeNetRequest rules to strip Content-Security-Policy and X-Frame-Options headers from every HTTP response. CSP is the browser's primary defense against injected JavaScript. With it stripped, anything the extension chooses to inject runs without restriction.
Configuration server polling
The background worker polls the cluster's configuration server on a regular interval (every five to six hours in the samples observed). The C2 response carries operator-controlled state. The extension's content script then injects a <script> element into the visited page's DOM and fetches JavaScript from the cluster's C2 endpoint, which executes in the page's own realm. Manifest V3's remote-code prohibition restricts what runs in the extension context. It does not restrict a dynamically-created <script> tag inside a page, especially once CSP has been stripped.
Page-context script injection
The injected JavaScript runs in the page context, not the extension context. It can read the page's DOM, intercept form submissions, scrape session contents, and use the user's authenticated identity for whatever it chooses. The payload runs with the page's privileges, not the extension's.
Each capability is defensible on its own. A color picker needs to read pages. Performance extensions strip headers for legitimate reasons. Configuration updates are standard. Script injection happens in legitimate extensions every day. The cluster's innovation is composing all four into a remote JavaScript execution pipeline that Manifest V3 was specifically designed to prevent.
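The four-capability combination leaves a static footprint that can be triaged offline. The sketch below is an illustrative heuristic, not the paper's scanner: it assumes an unpacked extension directory containing manifest.json and any static declarativeNetRequest rule files it references.

```python
import json
from pathlib import Path

# Header names whose removal is suspicious when paired with <all_urls>.
STRIPPED_HEADERS = {"content-security-policy", "x-frame-options"}

def manifest_risk_flags(ext_dir: str) -> list[str]:
    """Flag the capability combination described above in one unpacked extension."""
    root = Path(ext_dir)
    manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))
    flags = []

    # 1. Universal host access.
    hosts = manifest.get("host_permissions", []) + manifest.get("permissions", [])
    if "<all_urls>" in hosts:
        flags.append("all_urls")

    # 2. Static declarativeNetRequest rules that remove CSP / X-Frame-Options.
    dnr = manifest.get("declarative_net_request", {})
    for ruleset in dnr.get("rule_resources", []):
        rule_path = root / ruleset.get("path", "")
        if not rule_path.is_file():
            continue
        for rule in json.loads(rule_path.read_text(encoding="utf-8")):
            for hdr in rule.get("action", {}).get("responseHeaders", []):
                if (hdr.get("operation") == "remove"
                        and hdr.get("header", "").lower() in STRIPPED_HEADERS):
                    flags.append(f"strips:{hdr['header'].lower()}")
    return flags
```

Either flag alone is weak evidence; both together on a single-purpose utility extension is the pattern this cluster exhibits and is worth manual review.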
Not theoretical. Present in source.
For most extensions in the cluster, the demonstrated payload at the time of analysis is ad fraud. That payload is not interesting on its own. Ad fraud campaigns are common, lightly enforced, and economically motivated. What is interesting is that one of the cluster's domains, statsdata[.]online, is functionally a remote command server.
The extensions report every domain the user visits to that server. The server's response is JavaScript that gets executed in the context of whatever page the user is currently on.
There is no technical limitation on what that JavaScript can do, beyond the same-origin restrictions of the page it lands in. On a banking page, it can read and exfiltrate session contents. On an SSO page, it can capture credentials at the moment of submission. On an internal admin console, it can inject UI to capture authentication tokens.
The backdoor is confirmed cluster infrastructure
statsdata[.]online is the JavaScript execution backdoor documented in the captured framework. shurkul[.]online is documented by Wladimir Palant as a known cluster script-delivery server. Four independent infrastructure fingerprints tie them to the same operation: (1) both resolve to 5.149.255.43 on HZ-Hosting in Plovdiv, Bulgaria; (2) both are registered through Hostinger Operations UAB; (3) both use the same nameserver pair ns1-2.dns-parking.com; (4) both sit on the .online TLD. Seven additional operator-controlled domains co-resolve on the same IP (topodat[.]info, triplestat[.]online, datvault[.]cloud, fivestat[.]com, sevendata[.]fun, marsdata[.]online, gadstat[.]com), all sharing the same registrar and nameserver pattern. The shurkul script-delivery path also rotates over time (/v1712/g1001.js to /v1713/g1001.js), and during our research window the backup execution endpoint secdomcheck[.]online rotated from 5.149.255.43 to 93.123.17.252. This is an active-maintenance pattern, not stale co-location.
The paper documents statsdata[.]online as part of the JavaScript execution backdoor protocol and notes that both older and newer infrastructure had limited curated threat-intelligence coverage relative to its operational significance.
The mechanism is source-level, not hypothetical
The captured Gen 1 framework posts a base64-encoded JSON body to statsdata[.]online/alk/g2.php. The body includes a per-install identifier, the extension ID, the current hostname, and a server-controlled country tag. The response is then appended to the page as a <script> node.
| Protocol element | What it carries | Why it matters |
|---|---|---|
u | Per-install identifier | Links activity back to one browser installation |
e | Extension ID | Identifies which cataloged extension is calling home |
d | Visited hostname | Reports where the user is browsing |
c | Country tag | Enables server-side targeting logic |
| Response body | Operator-controlled JavaScript | Executed in the visited page's realm |
The paper deliberately separates capability from observed payload use. The demonstrated payload family is ad injection and fake-SERP manipulation, but the JavaScript execution channel is not technically limited to ad fraud. It can deliver arbitrary JavaScript selected by the server for the page the user is visiting.
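For sandbox replay or decoding captured beacons, the body format described above (base64 over compact JSON with the u/e/d/c fields) is easy to reconstruct. This is a sketch under those documented field names; live beacons may carry additional keys, so decoders should not assume these four only.

```python
import base64
import json

def encode_beacon(install_uuid: str, ext_id: str, hostname: str, country: str) -> str:
    """Build a Gen 1-style beacon body: base64 over a compact JSON object.

    Field names u/e/d/c follow the protocol table in this paper.
    """
    body = {"u": install_uuid, "e": ext_id, "d": hostname, "c": country}
    return base64.b64encode(json.dumps(body, separators=(",", ":")).encode()).decode()

def decode_beacon(blob: str) -> dict:
    """Decode a captured beacon body back into its JSON fields."""
    return json.loads(base64.b64decode(blob))
```

Decoding bodies captured at the proxy turns an opaque POST to /alk/g2.php into an explicit record of which install reported which hostname.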
The most important finding for defenders.
The lottingem[.]com/re.php endpoint serves a dual purpose that was not apparent from initial code analysis. The request URL carries exfiltrated data as query parameters. The response body delivers the malware payload. Data exfiltration and code delivery happen in a single HTTP round trip.
The URL itself carries the surveillance
Visited domain, full page title, install UUID, Chrome extension ID. The proxy logs the request URL before it can evaluate the response.
The body is the malware
Complete injection framework. Same connection. Same trip. By the time response inspection completes, the request has already crossed the wire.
Enterprise proxies that block lottingem[.]com based on response categorization prevent the visible ad injection. They do nothing about the data exfiltration. The DNS query resolved. The TCP connection established. The TLS handshake completed. The HTTP request with the full URL was transmitted before the proxy's block took effect. Blocking the response prevents monetization. Blocking the response does not prevent surveillance.
The block you have is not the block you think you have.
Most enterprise proxies enforce policy on response evaluation: the response body comes back, the proxy categorizes the destination or inspects content, and the response is blocked or allowed. By that point, the request URL has already crossed the wire. For C2 channels that carry exfiltrated data in the request URL itself, response-blocking provides zero protection against data theft. The fix is request-URL-aware blocking: matching the destination domain at DNS or pre-connection layer rather than at response evaluation. DNS-level blocking, request-URL filtering, and pre-flight categorization all stop the exfiltration. Response-body inspection does not. If you blocked lottingem[.]com and got a green status indicator from your proxy console, the data was already gone.
What the page titles contain
The t= parameter carries the complete page title of every page viewed in the infected browser, transmitted as a URL query string to the operator's C2 before any response is evaluated. Page titles routinely contain project names, environment identifiers (staging, pre-production, development instances), internal IP addresses, employee names, document titles, ticket numbers, and system identifiers that domain-level logging alone would never expose. This is not browsing history. It is content-level surveillance of enterprise activity.
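Because the exfiltration rides in the request URL, historical proxy logs already contain the stolen titles. A short recovery sketch follows; only the t= parameter name is documented in this paper, so the other parameter names in the sample log line are illustrative placeholders.

```python
from urllib.parse import urlsplit, parse_qs

def leaked_titles(urls):
    """Yield (host, page_title) pairs recovered from logged C2 request URLs.

    Extracts the t= query parameter documented in this paper; parse_qs
    percent-decodes the title back to readable text.
    """
    for url in urls:
        parts = urlsplit(url)
        qs = parse_qs(parts.query)
        if "t" in qs:
            yield parts.hostname, qs["t"][0]
```

Run against retained proxy logs, this answers the incident-response question domain-level logging cannot: what content-level information already left the environment before any block was in place.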
A complete rewrite.
The first-generation payload (redirect_checker.js, 44 KB) is what Wladimir Palant could not obtain in his original analysis. We obtained it through sandbox capture of the C2 round-trip. But it is no longer the production payload. The cluster has shipped a complete rewrite.
redirect_checker.js (Gen 1)
- Size: 44 KB, 1,463 lines
- Architecture: global namespace (window.zMainObj)
- Build tooling: none
- Code quality: dead code, return; stubs, commented blocks
- Module system: none
- Error handling: minimal try/catch
- Search engines: Google only (stub support for Yahoo/Tfl/Smv)
- Ad source: third-party networks (Epom, gulkayak.com)
- C2 domains: statsdata.online, gulkayak.com, doubleview.online, rumorpix.com, topodat.info, aj2472.online, astato.online

m3011.js (current production)
- Size: 111–135 KB depending on version, Webpack single-line bundle
- Architecture: Webpack IIFE with module system
- Build tooling: Webpack (strict mode, property enumeration)
- Code quality: clean production build, no dead code
- Module system: t.d() / t.o() Webpack module exports
- Error handling: comprehensive error logging, iframe error tracking with localStorage TTL, global window.onerror
- Search engines: Google, Bing, and Yahoo with full native support
- Ad source: Yahoo Search Partner ads directly
- C2 domains: datvault.cloud, astralink.click
Zero overlap. Not a single function name, variable name, CSS selector, domain, or code pattern appears in both files. m3011.js is not a refactor of redirect_checker.js; it is a ground-up replacement. The division of labor is also sharper in the new generation: the extension creates the execution environment (CSP stripping, script injection, iframe scaffolding), and m3011.js is the production payload that runs inside that environment, performing the actual SERP hijacking. The extension is the loader. The server delivers the weapon.
Server-side per-victim customization
m3011.js as delivered is not a static file. The server replaces template variables at delivery time, producing a unique payload for every infected browser.
| Template Variable | Replacement |
|---|---|
| %M_FRAMEURL_YA% | Per-victim Yahoo iframe URL |
| %M_UNIID_YA% | Per-extension-install UUID |
| %M_EXTID_YA% | Chrome extension identifier |
| %M_COUNTRY_CODE_YA% | Country code for ad targeting |
| %M_LOG_NEW_ADS_BLOCKS% | Feature flag that toggles ad discovery logging |
Every endpoint gets a payload with its own unique identifiers baked into the JavaScript. Feature flags can be toggled server-side for individual installations. The security implication is direct: hash-based detection is useless. No two endpoints receive the same file. SHA256 signatures, YARA rules matching fixed strings, and IOC feeds based on file hashes will never match because every delivered copy is unique. Detection must target behavioral patterns rather than static file signatures.
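The effect on hash-based detection is easy to demonstrate. The toy sketch below uses the documented template variable names; the surrounding payload text is invented purely for illustration.

```python
import hashlib

# Invented stand-in for the real payload; only the %M_..._YA% names are documented.
TEMPLATE = 'var frameUrl="%M_FRAMEURL_YA%",uid="%M_UNIID_YA%",ext="%M_EXTID_YA%";'

def render_for_victim(template: str, replacements: dict) -> str:
    """Substitute per-victim values the way the delivery server is described to."""
    out = template
    for var, value in replacements.items():
        out = out.replace(var, value)
    return out

a = render_for_victim(TEMPLATE, {"%M_UNIID_YA%": "uuid-aaaa",
                                 "%M_FRAMEURL_YA%": "https://example.invalid/a",
                                 "%M_EXTID_YA%": "ext1"})
b = render_for_victim(TEMPLATE, {"%M_UNIID_YA%": "uuid-bbbb",
                                 "%M_FRAMEURL_YA%": "https://example.invalid/b",
                                 "%M_EXTID_YA%": "ext1"})

# Identical logic, different bytes: per-victim hashes never collide.
assert hashlib.sha256(a.encode()).hexdigest() != hashlib.sha256(b.encode()).hexdigest()
```

Two installs receive byte-for-byte different files from the same template, which is exactly why SHA256 feeds and fixed-string YARA rules fail here while behavioral patterns survive.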
Active intelligence gathering
The datvault[.]cloud/logb.php endpoint is not a telemetry collector. It is a reconnaissance platform. When the %M_LOG_NEW_ADS_BLOCKS% feature flag is enabled, m3011.js scans the SERP DOM for ad elements it does not already know how to handle, captures the ad ID, 10 levels of DOM ancestor HTML, the parent element's complete HTML, the user's search query, the iframe URL, and the install identifiers, and sends them as raw JSON to logb.php.
The cluster uses infected browsers as a distributed reconnaissance network. When Google or Bing changes their SERP layout, the logb.php endpoint captures the new structure from real user sessions. The operator can then update m3011.js to handle the new layout before it breaks their injection. This is the malware equivalent of a continuous integration pipeline.
Computed CSS cloning
The most sophisticated technique in m3011.js is the visual disguise system. Rather than applying a fixed CSS stylesheet to injected ads, the payload reads the computed styles from real SERP elements at runtime and applies them to the injected content. On Google, it reads font family, font size, and color from #rso span a[ping] (organic result links), #rso span.VuuXrf (site names), and #rso cite span (sub-site URLs). On Bing, it reads from #b_results li.b_algo h2 a, #b_results li.b_algo p, and other selectors.
The injected ads inherit the exact computed styles of the real results they are replacing. If Google changes their font stack, the injected ads change with it. If Bing updates their link color in dark mode, the injection adapts automatically. The payload also detects dark mode by reading the page background color and applies a complete set of color tokens for each mode. Injected content in a dark-mode Bing session looks like native dark-mode Bing results.
From Google AdSense to Yahoo Search Partner.
The first-generation payload exploited Google's AdSense for Search program. As documented in Palant's analysis, the original ad fraud scheme hijacked search traffic to fake SERPs that embedded Google Custom Search Engine widgets. Google served real ads through real auctions, and AdSense for Search paid the publisher account holder the revenue share. Google was simultaneously the victim, the payment processor, and the unwitting revenue source.
The new production payload (m3011.js) has rotated revenue from Google AdSense for Search to Yahoo Search Partner. The first-generation Google AdSense path remains live in parallel for legacy extensions, so both monetization stacks operate simultaneously. How the new model works:
- The user searches on Google or Bing.
- m3011.js extracts the search query from the SERP URL.
- A hidden iframe loads a Yahoo Search results page for the same query.
- Native Google or Bing ads are hidden via injected CSS (#tads, .b_ad, .commercial-unit-desktop-top).
- The cloned Yahoo Search Partner ads, styled with computed CSS from the live SERP, are overlaid in their place.
- The user clicks what looks like a native Google or Bing ad and is sent through Yahoo's click-attribution chain.
- Yahoo Search Partner pays the cluster operator the publisher revenue share.
Yahoo joins Google as an unwitting revenue source for the new production payload, while the legacy Google AdSense path continues to fund the older extensions. The technique is the same: hijack search traffic, inject ads from a partner program, monetize the clicks. The architecture is constant across both.
Real names linked to detailed browsing.
One captured payload variant contains an active function that scrapes the signed-in Google user's real name and email address from the Chrome sign-out element and exfiltrates the data to doublestat[.]info. We also observed a variant where this block was removed, so treat identity harvesting as variant-gated rather than universally present.
When this variant is served, the C2 round-trip telemetry (every domain visited, every page title) can be linked to a real identity and indexed by a persistent install UUID. The mechanism is straightforward. The implications are not.
For a SOC analyst whose corporate Google account is signed into Chrome, the identity-harvesting variant can expose the analyst's name, email, every domain visited, every page title viewed, and persistent UUID-keyed history across sessions. That dataset is more valuable than the ad fraud revenue.
What defenders can actually hunt.
The paper's defender guidance separates high-confidence compromise signals from shared-service pivots. The strongest starting points are the cataloged extension IDs, actor-controlled domains, and the highly specific CDN benchmark path used by the production payload.
The fake-header CSP-stripping fingerprint is the paper's strongest code-pattern attribution signal. It was validated against a biased but relevant 265-extension corpus with no false positives observed. Treat that as strong evidence, not as a universal mathematical guarantee.
Active, persistent, and hard to see.
The paper's detection-gap analysis explains why standard controls often miss this class of activity:
- No native binary behavior: the malicious logic runs as JavaScript inside the browser extension and page contexts.
- No novel persistence primitive: persistence is the browser's own extension installation and update model.
- Network telemetry needs context: the most actionable signal is the extension, Host/SNI, URL path, and storage context together.
This is why the IOC reference emphasizes scoped controls instead of isolated string matches or blanket IP blocks.
The 500b-bench.jpg indicator
The paper's highest-signal CDN-path indicator is a connectivity check the production payload runs before initiating ad injection: a request for a file named 500b-bench.jpg on the cluster's CDN host. The file is a 500-byte JPEG fetched as a lightweight probe to confirm the CDN is reachable, and the ?t= parameter cache-busts each request. The filename reflects its purpose: a benchmark probe. Use the full hostname and path together; do not treat the shared CDN apex as actor-owned infrastructure.
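In proxy or DNS logs, the probe is matchable on host and path together. A sketch follows; the filename and the t= cache-buster come from this paper, while the watched-host set is left as a parameter because the probe rides on shared CDN infrastructure and the full directory prefix is not reproduced here.

```python
from urllib.parse import urlsplit, parse_qs

# Filename and cache-buster are documented in this paper.
PROBE_FILE = "500b-bench.jpg"

def is_bench_probe(url: str, watched_hosts: set[str]) -> bool:
    """True when a logged URL matches the connectivity-probe pattern:
    a watched host, a path ending in 500b-bench.jpg, and a t= parameter."""
    parts = urlsplit(url)
    return (parts.hostname in watched_hosts
            and parts.path.endswith("/" + PROBE_FILE)
            and "t" in parse_qs(parts.query))
```

Matching all three elements together keeps the shared CDN apex out of scope, per the scoping guidance above.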
Multi-extension exposure
The catalog shows a portfolio strategy: color pickers, screenshot tools, ad blockers, volume boosters, dark-mode utilities, and document tools. Defenders should not hunt for only one product name. Apply the full 23-ID reference set, while treating the single ledger-only entry as lower-confidence and requiring surrounding context.
Why default defense layers miss this.
The paper's detection-gap analysis explains why this campaign is structurally aligned against common endpoint, proxy, secure-web-gateway, and browser-store controls. The problem is not that no telemetry exists. The problem is that default telemetry models rarely connect browser-extension activity, URL-level exfiltration, and extension-scoped storage into one alertable story.
Behavioral and ML-based EDR
Activity occurs entirely inside Chrome's JavaScript execution environment. No file is dropped. No process is spawned outside the browser. EDR is architected for process behavior, file changes, and network anomalies at the host level. Browser-internal injection using the browser's own network stack is invisible to that telemetry model.
Telemetry exists below the console
A SuspiciousDnsRequest event for ahacdn[.]me existed in the raw telemetry but was never promoted to a visible alert.
The detection existed in the data. The detection pipeline did not promote it to visibility. A SOC analyst running the standard workflow would never see it. The signal was there. The pipeline was the failure.
Categorization databases can lag new actor infrastructure. A proxy can see the envelope and still miss the browser-extension behavior that made the request meaningful.
Shared infrastructure creates both false-negative and false-positive pressure. The paper recommends Host, SNI, path, and extension-context scoping.
DNS controls are useful for confirmed actor-owned hostnames, but they cannot safely generalize from shared CDN apexes or shared IP addresses.
This is a structural detection gap, not a tuning issue.
EDR is doing what it was designed to do. Proxy categorization is doing what it was designed to do. The cluster has built around them. The detection layers we rely on for office-network defense were architected for an attack model where malware lands on a host and the host does something observable. This cluster persists through the browser extension model and executes malicious logic inside browser JavaScript contexts. Closing this gap requires browser-resident security telemetry, extension-aware inventory, or infrastructure-level visibility into browser-originated network calls categorized at the extension level. None of these is standard equipment in most enterprise environments today.
The Featured badge is not a security control.
The Chrome Web Store and browser-extension stores are useful distribution controls, but the paper treats them as imperfect security boundaries rather than sufficient enterprise defenses.
The Chrome Web Store remains a distribution and trust surface, not a complete security boundary. The paper documents live or unlisted cluster extensions across Chrome Web Store and Microsoft Edge Add-ons, and notes that store removal does not guarantee that already installed extensions disappear from managed endpoints.
Per Google's own announcement of the Featured badge in April 2022:
"Chrome team members manually evaluate each extension before it receives the badge, paying special attention to: adherence to Chrome Web Store's best practices guidelines, including providing an enjoyable and intuitive experience, using the latest platform APIs and respecting the privacy of end-users."
The Featured badge is awarded after manual review by a Chrome team member. Developers cannot pay for it. The published evaluation criteria specifically include "respecting the privacy of end-users." For enterprise defenders, the practical implication is that "trust the Chrome Web Store" is not a load-bearing security control.
A separate question is whether enterprises can force-disable extensions from endpoints where they were already installed. The practical control is policy enforcement through ExtensionInstallBlocklist in Chrome Browser Cloud Management, Intune, or Group Policy. Removing a listing stops new installs; policy is still needed to disable existing copies and prevent user re-enablement.
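The blocklist mechanics above can be sketched concretely. The following is a minimal illustration, not an official deployment script: `ExtensionInstallBlocklist` is the real Chrome policy key, the Linux managed-policy path follows Google's Chrome Enterprise documentation, and the three IDs shown are the subset named in this write-up; the full 23-ID reference set belongs in a production blocklist.

```python
import json

# Subset of cluster extension IDs named in this article; use the full
# 23-ID reference set from the paper in production.
BLOCKLIST = [
    "ibbkokjdcfjakihkpihlffljabiepdag",
    "bkknccgnmpcnhppklomdjkphccmpblga",
    "jbdegnmcajkhjemebonejojlgkgcddhc",
]

def chrome_managed_policy(ids):
    """Render a Chrome managed-policy JSON fragment.

    On Linux fleets this file lands in /etc/opt/chrome/policies/managed/;
    Intune and Group Policy carry the same ExtensionInstallBlocklist key
    through their own templates."""
    return json.dumps({"ExtensionInstallBlocklist": sorted(ids)}, indent=2)

print(chrome_managed_policy(BLOCKLIST))
```

Because the policy is enforced by the browser, it also disables copies that were installed before the listing was taken down, which the store takedown alone does not do.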
The cluster's next wave is already registered.
Reverse IP lookups across the five HZ-Hosting IPs and the Hostinger Gen 2 (Cluster B2) subnet expanded the catalogued infrastructure to roughly 60 active and reserved domains. The Gen 2 cluster alone contributes 13 staged-but-undeployed domains alongside its two live C2 hosts; additional passive-DNS hits on the Gen 1 Cluster A IPs surface further reserved domains. Several return HTTP 200 (live and serving); others return HTTP 403 (registered, pointed at cluster infrastructure, not yet activated).
The naming pattern of the reserved domains is suggestive of the cluster's roadmap. Each maps to a high-installation Chrome Web Store category that provides the same attack surface as a color picker: a simple stated function that justifies broad permissions and sustains long-term installation.
The cluster's next wave of extensions is being positioned now. The infrastructure is registered, pointed at the production C2 cluster, and waiting for the extensions themselves to ship. In parallel with the new categories, the operator continues to migrate CDN infrastructure to evade blocklisting. During our observation window, ahacdn[.]me moved from its original IP (88.208.5.12) to a new shared ad-CDN range (45.133.44.0/24) on the same provider. Domain-, Host-, SNI-, and path-scoped blocking survives that migration. IP-only blocking is brittle and risks collateral impact.
This is not an abandoned operation. Someone is watching the blocklists and responding.
Color Picker · Eyedropper.
The paper explicitly separates the "Color Picker · Eyedropper" extension (gogbiohkminacikoppmljeolgccpmlop, about 400,000 users) from this cluster.
The extension appears on MyColorPick's Chrome Web Store "related extensions" list, but that placement is a recommendation-surface artifact, not an ownership signal. Prior public reporting attributes the extension to a separate malicious operation. It uses different infrastructure (Cloudflare, not HZ-Hosting Bulgaria), a different technique (AES-GCM with a hardcoded key, not this cluster's base64 JSON pattern), and shares no code, no domains, and no infrastructure with this cluster. It is operated by a different threat actor.
The important point for defenders is attribution hygiene. Do not roll gogbiohkminacikoppmljeolgccpmlop into this cluster's IOC set. It is independently malicious and worth tracking, but it belongs in a separate bucket.
Three immediate, tractable actions.
1. Run the IOC sweep first
Use the IOC list to query DNS logs, network connection logs, browser extension inventory, and extension-scoped storage. DNS-tier matching is necessary, not optional: an extension-presence audit that returns clean does not rule out exposure via the page-served path noted in §02. The highest-signal CDN-path indicator in the paper is requests to cdn23602612[.]ahacdn[.]me/500b-bench.jpg, especially when paired with a cataloged extension ID or actor-owned Host/SNI value.
For 45.133.44.0/24, prefer scoped controls on the documented cluster hostnames and request paths. Treat subnet-wide firewall blocks as an environment-specific decision only if your team can absorb legitimate ad-tech collateral. Any non-browser process contacting these IOCs is a different threat that warrants its own investigation.
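The scoping logic recommended above can be expressed as a small matcher. This is an illustrative sketch: the log-record field names (`host`, `path`, `ip`) are placeholders for whatever your proxy schema uses, and the tuples shown are drawn from the IOCs named in this article.

```python
# (host, path-prefix) pairs from the article's scoped IOC set.
SCOPED_IOCS = {
    ("cdn23602612.ahacdn.me", "/500b-bench.jpg"),
    ("statsdata.online", "/alk/g2.php"),
    ("lottingem.com", "/re.php"),
}

def match_scoped(record):
    """Flag a proxy-log record only when host AND path match a cluster IOC.

    Bare-IP matching against 45.133.44.0/24 is deliberately absent: that
    range is a shared ad CDN, so IP-only hits would also flag legitimate
    ad-tech tenants."""
    return any(
        record.get("host") == host and record.get("path", "").startswith(prefix)
        for host, prefix in SCOPED_IOCS
    )

hit = {"host": "cdn23602612.ahacdn.me", "path": "/500b-bench.jpg", "ip": "45.133.44.7"}
miss = {"host": "legit-adtech.example", "path": "/banner.js", "ip": "45.133.44.7"}
```

Note that `hit` and `miss` share an IP on the shared range; only the Host-plus-path scoping separates them, which is the point of the guidance above.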
2. Inventory browser extensions across the fleet
Use Chrome and Edge enterprise policy controls to restrict extension installation to an approved list. The cluster extension IDs in the IOC section below can be added to a blocklist immediately. Any extension requesting <all_urls> host permission warrants explicit review against its actual stated functionality, and any extension requesting nativeMessaging alongside <all_urls> should be treated as high risk by default.
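The permission-triage rule above reduces to a short classifier over an unpacked extension's `manifest.json`. The risk labels are ours; `permissions` and `host_permissions` are the real Manifest V3 keys.

```python
def triage_manifest(manifest):
    """Classify an extension manifest by permission risk.

    Mirrors the guidance above: <all_urls> plus nativeMessaging is
    high risk by default; <all_urls> alone warrants review against the
    extension's stated function. A triage aid, not proof of malice."""
    perms = set(manifest.get("permissions", []))
    hosts = set(manifest.get("host_permissions", []))  # MV3 host grants
    broad = "<all_urls>" in perms or "<all_urls>" in hosts
    if broad and "nativeMessaging" in perms:
        return "high-risk: review immediately"
    if broad:
        return "review: broad host access vs stated function"
    return "routine"
```

Run this over the manifests your inventory tooling exports and sort the fleet by label before diving into individual extensions.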
3. Treat browser-context attacks as a distinct detection problem
The detection gap exposed by this campaign is structural. Closing it requires acknowledging that the standard EDR + SSL proxy + DNS stack is not architected to detect browser-internal attacks. Browser-resident security telemetry, infrastructure-level visibility into browser-originated network calls, and extension-aware threat hunting are all paths forward. None of them is standard equipment.
Start with policy and scoped network controls.
Force-disable the cataloged extension IDs first, then apply network controls to confirmed actor domains, Host/SNI values, and URL paths. Treat shared CDN apexes, shared IPs, generic analytics IDs, and parked domains as scoped pivots rather than standalone blocklist entries.
Customers did not have to wait for publication.
As soon as the scoped IOC set was validated, 7AI ran targeted hunts for PLAID ELITE customers across browser-extension inventory, DNS, proxy, and network telemetry. Where the hunt surfaced exposure, the customer's AI Security Engineer could move directly into triage and response instead of waiting for a public write-up or a third-party feed update.
The IOC explorer.
Organized by infrastructure cluster and aligned to the paper's defender reference. Click any indicator to copy, or download the complete scoped IOC set below. The full paper includes two YARA rules, three Suricata signatures, forensic artifacts, and response guidance.
Download the scoped IOC set
Download the canonical IOC Blocklist PDF, the YARA rules, the Suricata rules, or the combined detection-rules Markdown. The TXT and Markdown buttons export the visible IOC rows from this page locally in your browser.
Cluster A · Core C2 · HZ-Hosting Bulgaria (AS59711)
Active C2 servers on HZ-Hosting infrastructure in Plovdiv, Bulgaria. Includes the JavaScript execution backdoor and production payload host.
Cluster B · Per-extension config server
All thirteen domains resolve to 79.141.164.251. Thirteen domains, one server.
Clusters B2 · C · D · E · Fake SERPs, CDN, new-gen C2
Operator-controlled fake SERP sites, the ad CDN, and the new-generation production payload C2 infrastructure.
Reserved domains · Roadmap indicators
Monitor for activation. Most are returning HTTP 403, pointed at cluster infrastructure but not yet serving. Names suggest the next wave of extension categories.
Extension IDs · browser inventory, force-remove
22 confirmed cluster extensions plus one lower-confidence ledger-only entry. The separate Annex Security color-picker finding is included only for attribution discipline and should not be counted in cluster metrics.
Analytics IDs, hashes, and high-value pivot points
Note on m3011.js: hash-based detection of the production payload is unreliable because the server applies per-victim template substitutions at delivery time. Behavioral indicators are more reliable than file hashes.
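The hash-instability point can be demonstrated in two lines. The template tokens below are illustrative stand-ins for whatever the server substitutes per victim; the mechanism, not the token names, is what the article documents.

```python
import hashlib

# Why file hashes fail on m3011.js: the server substitutes per-victim
# values into the template at delivery time, so every download can hash
# differently. The {{UUID}} / {{CC}} tokens here are illustrative.
template = 'var cfg={uid:"{{UUID}}",geo:"{{CC}}"};'

def sha256(s):
    return hashlib.sha256(s.encode()).hexdigest()

victim_a = template.replace("{{UUID}}", "aaa1").replace("{{CC}}", "US")
victim_b = template.replace("{{UUID}}", "bbb2").replace("{{CC}}", "DE")
```

One substituted byte is enough to produce an unrelated digest, which is why the paper leans on behavioral indicators instead.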
Severity guide
Expected process names on a hit include chrome.exe, msedge.exe, or another Chromium-based browser. Non-browser processes contacting these IOCs indicate a different threat.
Frequently asked questions.
The Phoenix Invicta cluster is the name Wladimir Palant publicly gave to a coordinated group of browser extensions that share infrastructure, code architecture, and a common Content-Security-Policy stripping technique. A subset of those extensions is designed to inject ads into web pages, strip CSP headers, and execute remotely-loaded JavaScript inside the browser context of every page their users visit. The cluster was first publicly documented by Palant in January 2025.
The JavaScript execution backdoor is present in the captured redirect_checker.js framework. The code posts the visited hostname, install UUID, extension ID, and country tag to statsdata.online/alk/g2.php, then appends the server response as a script element inside the visited page. The paper also documents the rotated backup at secdomcheck.online. The demonstrated payload family is ad injection and fake-SERP manipulation, but the execution channel can deliver arbitrary server-selected JavaScript.
The cluster's primary C2 endpoint (lottingem.com/re.php) uses a single HTTP request to do two things at once: the request URL carries exfiltrated data (visited domain, page title, install UUID, extension ID) as query parameters, and the response body delivers the payload. This means proxies that block the response have already let the request through. The data was exfiltrated by the time the block took effect. Blocking the response prevents monetization. Blocking the response does not prevent surveillance.
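The single-transaction design can be made concrete with a small reconstruction. The query-parameter names below are placeholders, not the actor's exact keys; only the endpoint and the exfiltrated fields come from the article.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def beacon_url(domain, title, uuid, ext_id):
    """Reconstruct the shape of the re.php beacon: exfiltrated data
    rides in the request URL itself. Parameter names are illustrative."""
    params = {"d": domain, "t": title, "u": uuid, "e": ext_id}
    return "https://lottingem.com/re.php?" + urlencode(params)

url = beacon_url("bank.example", "My Account", "5f3c-uuid", "ibbkokjdcfjakihkpihlffljabiepdag")

# Everything below left the endpoint in the request line, before any
# proxy verdict on the response could take effect.
leaked = parse_qs(urlsplit(url).query)
```

By the time a proxy decides to drop the response body (the payload), `leaked` has already crossed the wire, which is exactly why response-blocking stops monetization but not surveillance.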
Not in its current shipped behavior, but the architecture supports it. The deployed JavaScript contains a function epom() that integrates with the Epom programmatic ad network. Inside it, a helper generateChanelTargeting() builds a nine-bucket demographic code by combining gender with an age band: 0013 (under 13), 1317 (13-17), 1824 (18-24), 2534 (25-34), 3544 (35-44), 4554 (45-54), 5564 (55-64), 6500 (65+), and 0000 (unknown). This is a generic demographic-targeting system, not an exclusively child-targeting feature. What is notable is the developer's choice to include a bucket spanning the COPPA-protected age range (under 13), which is a deliberate design decision rather than an accident. The function is currently short-circuited by a return; statement at the top, but it is not deleted, and activation requires only a server-side configuration change with no extension update or Chrome Web Store re-review. We do not have evidence this targeting is currently active in production.
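The nine buckets can be re-expressed as a lookup. The band edges and codes come from the decompiled `generateChanelTargeting()` helper described above; the Python itself is our re-expression, not the actor's code, and omits the gender component for brevity.

```python
# Age bands and codes as documented in generateChanelTargeting():
# 0013 (<13), 1317, 1824, 2534, 3544, 4554, 5564, 6500 (65+), 0000 (unknown).
AGE_BANDS = [(12, "0013"), (17, "1317"), (24, "1824"), (34, "2534"),
             (44, "3544"), (54, "4554"), (64, "5564")]

def age_bucket(age):
    """Map an age to the payload's demographic code (gender omitted)."""
    if age is None:
        return "0000"          # unknown
    for upper, code in AGE_BANDS:
        if age <= upper:
            return code
    return "6500"              # 65+
```

The "0013" bucket is the one spanning the COPPA-protected range; its presence in the band table is the deliberate design choice the article flags.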
One captured payload variant contains an active function that scrapes the signed-in Google user's real name and email address from the Chrome sign-out element and exfiltrates the data to doublestat.info. We also observed a variant where this block was removed, so identity harvesting should be treated as variant-gated rather than universally present. When served, that capability can link real identities to detailed browsing activity indexed by a persistent install UUID.
Manifest V3 prohibits extensions from loading and executing JavaScript from a remote source in the extension context. The cluster's extensions sidestep that restriction in two steps. First, a declarativeNetRequest ruleset strips Content-Security-Policy and X-Frame-Options headers from every page response. Second, the extension's content script injects a <script> element into the visited page's DOM, which then fetches operator-controlled JavaScript from the cluster's C2 (statsdata.online/alk/g2.php) and executes it in the page's own realm. Both bypasses target the page context, not the extension context, so the Manifest V3 prohibition does not apply.
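The first step of that bypass has a well-defined shape under Chrome's `declarativeNetRequest` API. The rule below is a schematic reconstruction from the behavior described above, rendered as Python for consistency with the other examples; it is not the cluster's literal ruleset.

```python
import json

# Schematic declarativeNetRequest rule that strips the two defensive
# headers named above from every page and frame response.
rule = {
    "id": 1,
    "priority": 1,
    "action": {
        "type": "modifyHeaders",
        "responseHeaders": [
            {"header": "content-security-policy", "operation": "remove"},
            {"header": "x-frame-options", "operation": "remove"},
        ],
    },
    "condition": {"urlFilter": "*", "resourceTypes": ["main_frame", "sub_frame"]},
}

print(json.dumps([rule], indent=2))
```

With CSP removed, the second step (injecting a `<script>` element that loads operator-controlled JavaScript) executes without the page's own defenses objecting, and the whole chain happens in the page context Manifest V3 does not govern.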
The malicious activity occurs inside the browser's JavaScript execution environment. No file is written to disk by the payload. No process is spawned outside the browser. No registry key is created. The network activity originates from a normal Chrome process making normal-looking HTTPS requests. EDR platforms are architected to detect process behavior, file changes, and network anomalies at the host level. The paper also documents one raw SuspiciousDnsRequest event for ahacdn.me that existed at the sensor layer but was not promoted to a visible alert.
The first-generation payload (redirect_checker.js, 44KB, analyzed by Palant in his January 2025 research) is what extensions shipped with. We obtained it through C2 sandbox capture. It targets Google search and exploits Google AdSense for Search via fake SERPs.
The current production payload (m3011.js, 111–135 KB depending on version, served live from fivestat.com) is a complete rewrite. It is Webpack-bundled, supports Google + Bing + Yahoo natively, uses computed CSS cloning to make injected ads visually indistinguishable from native results, includes server-side per-victim customization, and uses a logb.php endpoint to capture new SERP ad layouts from infected browsers as a distributed reconnaissance system. It has rotated revenue from Google AdSense for Search to Yahoo Search Partner. The two payloads share zero function names, variable names, or domains. They are complementary: the extension creates the execution environment, the production payload performs the actual SERP hijacking inside it.
No. The architecture, full host permissions, CSP stripping, and server-controlled JavaScript injection into arbitrary page contexts support any JavaScript payload the operator chooses to deliver. The observed and reverse-engineered payload family monetizes through ads. The same execution channel could deliver credential harvesting, session token theft, or targeted attacks against authenticated banking, SSO, or admin console pages. The payload is the variable. The architecture is the constant.
Three actions: (1) Inventory deployed browser extensions across the fleet and apply enterprise policy controls to restrict installation to an approved list. The cluster extension IDs above can be blocklisted immediately. (2) Run the IOC sweep above against DNS logs, network logs, browser inventory, and extension-scoped storage. The cdn23602612.ahacdn.me/500b-bench.jpg URL pattern is the highest-signal CDN-path indicator in the paper when paired with actor context. For 45.133.44.0/24, prefer scoped controls unless your environment can absorb legitimate ad-tech collateral. (3) Treat extension permissions as a supply-chain trust decision, particularly any extension requesting <all_urls> host access or nativeMessaging.
Not on extension audit alone. The cluster also reaches victims through third-party tracker chains on legitimate websites, with no extension installed on the host. A clean extension audit does not rule out that exposure. Run the DNS and proxy-log IOC sweep against the same lookback window. If DNS shows hits but the extension hunt is clean, treat it as page-served exposure and remediate at the network tier rather than the endpoint.
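That triage decision is mechanical enough to encode. The function below sketches the routing described above; input shapes are illustrative.

```python
def classify_exposure(dns_hits, extension_hits):
    """Route remediation per the guidance above.

    dns_hits / extension_hits are lists of matched indicators from the
    IOC sweep and the extension inventory respectively."""
    if extension_hits:
        return "endpoint: force-remove extension, then apply network controls"
    if dns_hits:
        return "network tier: page-served exposure, block at DNS/proxy"
    return "clean in this lookback window"
```

The middle branch is the one a clean extension audit would otherwise miss: DNS hits with no installed extension point at the third-party tracker path, remediated at the network tier rather than the endpoint.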
Start with extension-aware controls, not only endpoint process telemetry. Inventory installed Chrome and Edge extensions, block the 23-ID reference set, search extension-scoped storage for the forensic artifacts in the paper, and apply network controls to confirmed actor domains, Host/SNI values, and URL paths. Treat shared CDN apexes, shared IPs, and generic analytics IDs as scoped pivots rather than standalone proof of compromise.
Walk it down.
Tap each item to mark it done. Progress is local to your browser session.
Detection & response checklist
- Block the cluster extension IDs at the browser policy layer · All organizations using managed Chrome / Edge
- Inventory all installed browser extensions across the fleet · All organizations
- Flag any installed extension requesting <all_urls> plus nativeMessaging for review · All organizations
- Run the DNS IOC sweep with 14-day lookback · All organizations
- Run the network connection IOC sweep with 7-day lookback · All organizations
- Search proxy logs for cdn23602612.ahacdn.me/500b-bench.jpg (highest-signal indicator) · All organizations
- Triage CRITICAL hits (statsdata.online, lottingem.com, doublestat.info, fivestat.com, 5.149.255.43) with priority · Any environment with hits
- Evaluate scoped controls for 45.133.44.0/24 before any subnet-wide block · Organizations that may have ad-tech collateral exposure
- Restrict Chrome Web Store extension installation to an approved list · All organizations
- Configure DNS-level or pre-connection blocking for C2 domains (response-blocking does not stop URL-parameter exfiltration) · All organizations
- Establish browser-context detection capability beyond standard EDR · All organizations
- Walk affected users through extension removal and review browser sessions for credential exposure · Any environment with hits
- Monitor the reserved domains for activation as a signal of next-wave deployment · Threat hunting teams
Store status is not the same as endpoint status.
The paper's status language is current as of verification on or before 2026-05-09. Browser-store listings, takedowns, and infrastructure can change quickly. The IOC and extension-ID reference remains useful for hunting because store removal does not automatically prove an endpoint is clean.
Live or installable listings at paper validation:
- ibbkokjdcfjakihkpihlffljabiepdag · unlisted but live
- bkknccgnmpcnhppklomdjkphccmpblga · active on Edge
- jbdegnmcajkhjemebonejojlgkgcddhc · active on Edge
The paper covers 22 confirmed cluster extensions plus one lower-confidence ledger-only entry. Removed listings still belong in the blocklist because pre-takedown installations can remain present unless policy disables them.
Re-verify store and infrastructure status before operational use. Treat the May 2026 paper as the evidence baseline, not as a real-time takedown tracker.
Companion materials.
For security teams who want the full technical depth, two companion documents accompany this research:
- The defender reference in the full paper with domain, IP, URL, analytics, and code IOCs, plus two YARA rules, three Suricata signatures, forensic artifacts, and response actions. → View the scoped IOC reference
- Wladimir Palant's January 2025 research, which named the cluster and reverse-engineered the core mechanism that this work builds on. → Read Palant's analysis
This research was conducted by the 7AI Threat Research team. Individual researcher names are withheld for operational-safety reasons. We extend public credit to Wladimir Palant, whose January 2025 analysis named the cluster and reverse-engineered the core mechanism this work builds on.
For questions about this research, contact security@7ai.com. To learn how 7AI approaches browser-context threats and other attacks that bypass traditional defenses, visit 7ai.com.