If you run a factory network, you already know the uncomfortable truth: uptime is king, and security often gets the scraps. MuddyWater (aka Seedworm/Static Kitten) just proved how fast that gap can get exploited. Symantec tracked them inside a major South Korean electronics manufacturer from February 20–27, 2026, using a playbook that doesn’t look flashy, just effective. This post breaks the chain down in plain language, so you can spot the same moves in your environment before someone else does.
The 7-Day Reality Check: How Espionage Unfolds in a Manufacturing Network
A lot of factory leaders hear “APT” and picture some Hollywood break-in. MuddyWater’s February 20–27, 2026 intrusion into a major South Korean electronics manufacturer was the opposite: calm, methodical, and built to look like normal admin noise. Symantec’s takeaway was blunt: the group spent a full week inside, running an intelligence-driven cyber espionage operation focused on industrial and intellectual property theft and, in some cases, on access that could extend to downstream customers or corporate networks.
Here’s what that kind of week tends to look like in real environments—especially manufacturing, where IT and OT have to coexist and downtime isn’t an option.
Day 1–2: Quiet recon (they map before they move)
The first stage is boring on purpose:
- Host and domain reconnaissance (who’s who, what talks to what)
- Antivirus enumeration via WMI (what security is installed, what might block them)
- Screenshot capture (a fast way to learn what a machine “is” without touching many files)
In a plant network, that recon is gold. A single engineering workstation can hint at MES/SCADA access patterns, vendor tooling, shared drives, and which accounts “always work.”
Day 3–4: Credential theft (because passwords beat exploits)
Once they know the terrain, they go after access that lasts:
- Fake Windows credential prompts to trick users into handing over passwords
- Registry hive theft of SAM/SECURITY/SYSTEM for offline credential extraction
- Kerberos ticket abuse tools to move like a trusted user without constantly re-authing
This is where espionage becomes real. If they can blend into “normal” logons, they can sit near design files, test results, supplier docs, and email threads for days.
Day 5–6: Persistence + internal maneuvering (the “don’t kick me out” phase)
MuddyWater didn’t need loud persistence. Symantec observed:
- Registry modifications to stay present
- Beaconing at 90-second intervals (a steady heartbeat, not constant hands-on activity)
- Repeated relaunch of sideloaded binaries to maintain access
That 90-second cadence matters. It’s consistent with an implant checking in, not an operator typing nonstop. In other words: your SOC might not see “a hacker,” just a system that looks a bit chatty.
Day 7: Data theft (exfil that can pass as normal web use)
When it’s time to pull data out, they don’t always spin up an obvious exfil server. In this campaign, Symantec noted use of sendit.sh, a public file-sharing service, likely to make outbound traffic look routine.
That’s the manufacturing reality check: espionage doesn’t need to stop production. It just needs to outlast your attention span—and quietly walk away with what makes your factory valuable.
Their Favorite Door: DLL Sideloading Using Signed Binaries Your Tools Won’t Panic About
Once MuddyWater has a foothold, they don’t always drop some obviously sketchy “hacker.exe.” They like a quieter trick: DLL sideloading.
DLL sideloading, explained like you’re on a plant floor
Windows apps often load helper files called DLLs when they start. Many programs will look for those DLLs in the same folder as the app before checking safer system locations.
So attackers flip the script:
- They take a legitimate, signed EXE (the “trusted app”)
- They place a malicious DLL next to it (the “untrusted passenger”)
- When the EXE launches, it accidentally loads the attacker’s DLL
Security tools and analysts are trained to trust signed binaries more than random unsigned programs. MuddyWater uses that defender instinct against you. Symantec described their campaign as relying heavily on this exact approach—legitimate, signed software loading malicious DLLs.
The two signed-binary combos Symantec called out (and why they’re clever)
Symantec highlighted two examples that should make any defender a little uneasy, because both look “normal” at a glance:
1) Fortemedia abuse: fmapp.exe + fmapp.dll
- fmapp.exe is a legitimate Fortemedia audio utility
- MuddyWater paired it with a malicious fmapp.dll
Audio utilities aren’t rare on laptops and some desktops. That familiarity is the point.
2) Security tooling abuse: sentinelmemoryscanner.exe + sentinelagentcore.dll
- sentinelmemoryscanner.exe is a legitimate SentinelOne component
- MuddyWater used a malicious sentinelagentcore.dll alongside it
This one’s nasty in a different way. A process name that sounds like endpoint protection can get a free pass in a busy environment.
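The two pairings above give defenders something concrete to sweep for. Below is a minimal sketch, assuming you can run a Python script against a target directory tree (in practice you’d scope the walk to user-writable paths and feed results into your EDR). The watchlist comes straight from the combos Symantec named; everything else is illustrative.

```python
import os

# Sideload pairings called out in this campaign (legit EXE -> malicious DLL
# dropped next to it). Extend as new pairings are reported.
WATCHLIST = {
    "fmapp.exe": "fmapp.dll",
    "sentinelmemoryscanner.exe": "sentinelagentcore.dll",
}

def find_sideload_pairs(root):
    """Flag directories where a watchlisted EXE sits beside its paired DLL."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        lower = {f.lower() for f in files}
        for exe, dll in WATCHLIST.items():
            if exe in lower and dll in lower:
                hits.append(os.path.join(dirpath, exe))
    return hits
```

A co-located pair isn’t proof of compromise on its own—some legitimate installs ship the real DLL next to the EXE—so treat a hit as a triage lead: check the DLL’s hash and signature before escalating.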
What that malicious DLL carried: ChromElevator (browser data theft)
In both cases, Symantec reported the malicious DLLs contained ChromElevator—a commodity tool used to steal data stored in Chrome-based browsers, including handling Chrome App-Bound Encryption decryption.
That matters because modern factories run on web apps:
- supplier portals
- email and SSO sessions
- internal dashboards for quality, inventory, and scheduling
- vendor support sites used by maintenance and engineering
If an attacker can pull browser-stored secrets, they may not need to “break in” again. They just log in like someone who belongs.
Hands-on Keyboard, Without the Keyboard: PowerShell + Node.js Loaders, Screenshots, Recon, and Tunnels
DLL sideloading is how MuddyWater blends in at the front door. PowerShell is how they get work done once they’re inside.
Symantec observed that MuddyWater still leaned heavily on PowerShell, but with a twist: the payloads were controlled through Node.js loaders rather than invoked directly. Think of Node.js as the remote control, and PowerShell as the set of hands doing the actual tasks on your systems.
What PowerShell handled (the day-to-day operator “chores”)
Symantec’s list reads like a checklist for factory-network espionage:
- Screenshot capture (fast situational awareness without opening a bunch of files)
- Reconnaissance (system, domain, environment)
- Fetching additional payloads (pulling down the next tool only when needed)
- Establishing persistence (staying after reboots and logoffs)
- Credential theft (getting reusable access)
- Creating SOCKS5 tunnels (a stealthy way to route traffic through a compromised host)
That last one is a big deal in manufacturing networks. A SOCKS5 tunnel can turn one compromised Windows box into a stepping stone that makes internal apps “reachable” from the attacker’s side, without obvious remote desktop sessions lighting up your dashboards.
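One thing that makes SOCKS5 huntable: per RFC 1928, a client opens with a fixed-format greeting—version byte 0x05, a method count, then exactly that many auth-method bytes. If your network sensor can hand you the first payload bytes of a flow, a crude check like the sketch below (an assumption-laden heuristic, not a production detector) can surface plaintext SOCKS5 handshakes; tunnels wrapped in TLS won’t show this and need the cadence/fan-out signals instead.

```python
def looks_like_socks5_greeting(payload: bytes) -> bool:
    """Heuristic: a SOCKS5 client greeting is version byte 0x05, then a
    method count, then exactly that many auth-method bytes (RFC 1928)."""
    if len(payload) < 3 or payload[0] != 0x05:
        return False
    nmethods = payload[1]
    return nmethods >= 1 and len(payload) == 2 + nmethods
```

Example: `looks_like_socks5_greeting(b"\x05\x01\x00")` (one method, no-auth) is a match, while an HTTP request body is not.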
Where Node.js fits (why it’s a smart controller)
Node.js isn’t suspicious by itself. It’s common on developer machines and some IT systems.
What’s unusual is Node.js acting as an orchestration layer on endpoints that have no business running it—like shared shop-floor workstations, engineering PCs that should be boring, or jump hosts that are supposed to be locked down. Symantec explicitly noted the shift to Node.js loaders controlling the PowerShell-driven activity.
What defenders should actually look for (signals you can hunt this week)
You’re not hunting “PowerShell exists.” You’re hunting patterns.
PowerShell execution that doesn’t match your environment
Look for:
- PowerShell spawning from unexpected parent processes (especially “utility” apps and odd signed binaries)
- PowerShell that shows up on systems that normally never run scripts
- Bursts of PowerShell activity that line up with recon + download behavior (short, repeatable clusters)
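The parent-process check is easy to prototype. The sketch below assumes you’ve parsed process-creation telemetry (e.g. Sysmon Event ID 1) into dicts with `process` and `parent` fields; the `EXPECTED_PARENTS` allowlist is a placeholder you would tune to your own environment, not a recommended baseline.

```python
# Parents we expect to launch PowerShell in this (hypothetical) environment.
EXPECTED_PARENTS = {"explorer.exe", "cmd.exe", "services.exe", "wsmprovhost.exe"}

def flag_odd_powershell(events):
    """Return process-creation events where PowerShell was spawned by a
    parent outside the environment's allowlist."""
    return [
        e for e in events
        if e["process"].lower() in ("powershell.exe", "pwsh.exe")
        and e["parent"].lower() not in EXPECTED_PARENTS
    ]
```

A “utility” app like fmapp.exe spawning PowerShell would land squarely in this bucket.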
Node.js where it doesn’t belong
Hunt:
- node.exe (or other Node runtimes) running on non-dev endpoints
- Node processes that appear shortly after a suspicious app launch, then stick around
Cadence that screams “implant,” not “person”
Symantec observed beaconing at 90-second intervals and noted that cadence fits implant-driven activity rather than continuous operator presence.
That’s your clue to correlate:
- repeated outbound connections on a tight timer
- periodic process wake-ups that look “too regular to be human”
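“Too regular to be human” is measurable. A minimal sketch, assuming you can export per-host outbound connection timestamps (epoch seconds) from firewall or proxy logs: compute the gaps between connections and flag series whose mean sits near the expected interval with low spread. The 90-second value comes from the observed beaconing; the 5-second tolerance is an illustrative knob, not a vetted threshold.

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, expected=90.0, tolerance=5.0, min_events=5):
    """Flag a connection series whose inter-arrival times cluster tightly
    around an expected interval (e.g. a 90-second implant check-in)."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    # Both the average gap and its jitter must be inside tolerance.
    return abs(mean(deltas) - expected) <= tolerance and pstdev(deltas) <= tolerance
```

Real implants often add deliberate jitter, so in practice you’d also sweep a range of `expected` values rather than hard-coding one.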
Re-launch behavior that keeps a foothold alive
Symantec also saw sideloaded binaries repeatedly relaunched to maintain access. Translation: if you keep seeing the same “normal-looking” executable reappearing after termination, treat it like persistence—even if the filename sounds harmless.
How They Steal Access: Fake Prompts, Hive Theft, Kerberos Abuse (and Why Browser Data Matters)
All that automation—PowerShell tasks, scheduled check-ins, quiet tunnels—only works long-term if MuddyWater can keep getting in like a real user. Symantec saw them do that with a stacked approach: fake Windows prompts, registry hive theft (SAM/SECURITY/SYSTEM), and Kerberos ticket abuse tools.
1) Fake Windows credential prompts: the “just sign in again” trap
This is the simplest move, and it still works.
A user sees what looks like a normal Windows authentication prompt and types their password. Now the attacker has:
- a real username + password (often reusable across VPN, email, file shares, and plant apps)
- a credential that blends into normal login traffic
In manufacturing, this lands hard because people are trained to clear pop-ups fast to get machines running again.
2) SAM/SECURITY/SYSTEM hive theft: stealing the ingredients Windows uses to verify logins
Symantec observed MuddyWater stealing the SAM, SECURITY, and SYSTEM registry hives.
Plain-English version:
- These hives store and protect credential material (like password hashes and related secrets).
- With the right access, attackers can copy them and try to extract or crack credentials offline.
What that gives them:
- local account access that may repeat across multiple engineering workstations
- a path to escalate when admins reuse local passwords (it happens more than anyone wants to admit)
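One common way those hives get exported is the built-in `reg save` command (e.g. `reg save HKLM\SAM out.hive`), which makes it a cheap thing to watch in command-line telemetry. The sketch below is one signal, not coverage—attackers can also grab hives via volume shadow copies or direct file access—and assumes you have process command lines from your EDR or Sysmon.

```python
import re

# "reg save" (or reg.exe save) targeting the credential hives is a classic
# precursor to offline hash extraction.
HIVE_DUMP_RE = re.compile(
    r"reg(?:\.exe)?\s+save\s+hklm\\(sam|security|system)\b", re.IGNORECASE
)

def flag_hive_dumps(command_lines):
    """Return command lines that export SAM/SECURITY/SYSTEM hives."""
    return [c for c in command_lines if HIVE_DUMP_RE.search(c)]
```

Legitimate backup tooling can trigger this too, so baseline which accounts and hosts run it before alerting broadly.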
3) Kerberos ticket abuse: “I’m already authenticated”
Kerberos is what many Windows domains use to prove you’re you without sending your password around constantly.
Symantec noted MuddyWater used Kerberos ticket abuse tools. That typically enables:
- moving through Windows environments with tickets that look legitimate
- less need for noisy re-authentication
- persistence that survives until tickets expire or are invalidated
Why ChromElevator and browser data theft changes the math
Symantec also reported MuddyWater’s malicious DLLs included ChromElevator, a tool used to steal data stored in Chrome-based browsers.
That matters because browser-stored secrets can be a shortcut to:
- email and SSO sessions
- internal portals (HR, procurement, ticketing, documentation)
- vendor sites and cloud consoles tied to production-adjacent tooling
A lot of orgs treat “saved passwords in the browser” as a convenience problem. In an espionage operation, it’s an access strategy.
Detection + Mitigation That Fits Factory Constraints: What to Monitor and What to Change This Quarter
When an espionage crew can steal access quietly, you don’t win by “blocking everything.” You win by watching the right seams and tightening a few controls that don’t break production.
What to monitor (tight checklist, high signal)
These are the behaviors Symantec tied to MuddyWater’s 2026 activity—translate them into detections, not generic alerts:
1) DLL sideloading + signed-binary weirdness
- Signed EXE launching from odd paths (user profile folders, temp directories, downloads)
- A signed EXE loading a DLL from its own working directory (DLL name matches what the EXE “expects,” but the location is wrong)
- Repeated relaunch of the same executable to maintain access
2) PowerShell and Node.js execution that doesn’t match the machine’s role
Symantec observed PowerShell used for multiple tasks, with payloads controlled via Node.js loaders:
- PowerShell used for recon, downloading payloads, persistence, credential theft, and SOCKS5 tunnels
- Node.js runtime present/executing on non-dev endpoints (especially shared workstations)
3) WMI “security product inventory” behavior
- WMI queries enumerating antivirus/security tooling (AV enumeration via WMI was called out early in the intrusion)
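On Windows desktop editions, AV inventory via WMI commonly means querying the `AntiVirusProduct` class in the `root\SecurityCenter2` namespace, so those two strings are a decent hunt anchor. A minimal sketch, assuming you’ve exported Microsoft-Windows-WMI-Activity operational log entries as text lines (field layout varies by collector, hence the plain substring match):

```python
SUSPICIOUS_WMI_MARKERS = (
    "securitycenter2",    # namespace holding AV registration info
    "antivirusproduct",   # class enumerating installed AV products
)

def flag_av_enumeration(log_lines):
    """Return WMI-Activity log lines that touch the AV inventory class."""
    return [
        line for line in log_lines
        if any(marker in line.lower() for marker in SUSPICIOUS_WMI_MARKERS)
    ]
```

Some legitimate inventory and RMM tools issue the same query, so correlate hits with what else the host was doing at the time.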
4) Screenshot artifacts (espionage tells on itself)
- Unusual screenshot activity can be a dead-simple signal; Symantec observed screenshot capture as part of their workflow
5) Tunneling + beacon patterns
- Network signs of SOCKS5 tunneling behavior (proxy-like connections from endpoints that shouldn’t proxy)
- 90-second beaconing cadence—regular, machine-like check-ins
6) Exfiltration to “normal-looking” services
- Outbound traffic to sendit.sh (Symantec noted it was used for data exfiltration to blend in with normal traffic)
What to change this quarter (hardening that won’t stall the line)
Application control for signed binaries (yes, even trusted ones)
- Build allow-lists around where signed binaries can run from, not just if they’re signed.
- Flag/deny signed binaries executing from user-writable locations.
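The “user-writable locations” test can start as a crude path-prefix check before you encode it in application-control policy. The roots below are illustrative, not exhaustive (per-drive temp folders, removable media, and network shares matter too); the sketch assumes you have execution image paths from endpoint telemetry.

```python
from pathlib import PureWindowsPath

# Roots a standard user can typically write to (illustrative, not exhaustive).
USER_WRITABLE_ROOTS = (
    PureWindowsPath(r"C:\Users"),
    PureWindowsPath(r"C:\Windows\Temp"),
    PureWindowsPath(r"C:\ProgramData"),
)

def from_user_writable_path(image_path: str) -> bool:
    """True if a binary executed from a location standard users can write to."""
    p = PureWindowsPath(image_path)
    return any(p.is_relative_to(root) for root in USER_WRITABLE_ROOTS)
```

A signed EXE returning True here is exactly the “signed-binary weirdness” the checklist describes: valid signature, wrong neighborhood.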
PowerShell controls that keep admin work possible
- Turn on script block logging and module logging where feasible.
- Use Constrained Language Mode on endpoints that don’t need full scripting.
- Alert on PowerShell making outbound connections or pulling payloads from the internet.
Node.js allow-listing
- If Node.js isn’t required on a box, it shouldn’t be there.
- Where it is required (dev/IT tooling), restrict execution to known paths and signed installers.
Reduce browser-stored credential exposure
Symantec tied the campaign to theft of Chrome-based browser data, so treat browser secrets like credentials:
- Disable password saving where possible on high-risk roles (engineering, IT admins, vendor access users).
- Push password managers with policy controls instead of “save in browser.”
Tiered admin model (limits how far one stolen credential travels)
- Separate identities for: workstation admin, server admin, domain admin.
- Don’t allow high-privileged accounts to log into everyday endpoints.
Egress controls that match factory reality
- Block known public file-sharing destinations where business doesn’t need them; at minimum, alert on them (sendit.sh is a concrete example)
- Force outbound web from sensitive segments through a proxy that logs destinations.
Where Cloaked fits (low friction, reduces blast radius)
A lot of manufacturing risk comes from reusable identifiers floating around: vendor portals, support logins, trial tools, random SaaS tied to engineering teams.
Cloaked is useful here in a practical, non-flashy way:
- Use masked emails and phone numbers for vendor access and internal sign-ups, so a compromised portal doesn’t hand attackers a real identifier they can reuse elsewhere.
- If a vendor gets breached (or gets phished), your team’s real contact points and recovery channels aren’t automatically exposed.
It doesn’t replace MFA or segmentation. It just shrinks the “credential reuse” surface area that espionage groups love to exploit.