West Pharmaceutical’s cyber incident is a gut-check: are you ready for data theft and encrypted systems?

May 14, 2026
by
Abhijay Bhatnagar

Most companies say they’re “prepared.” Then a real incident hits and the playbook meets reality: systems go dark, people can’t ship, leadership wants answers you don’t have yet. West Pharmaceutical disclosed a material cybersecurity incident after detecting unauthorized activity on May 4 and then confirming on May 7 that data was exfiltrated and some systems were encrypted. They shut down and isolated systems globally, restricted access, notified law enforcement, and brought in Palo Alto Networks’ Unit 42. The uncomfortable part: investigators were still working out what data actually left. If that happened to your company tomorrow morning, would you be calm—or scrambling?

The West timeline (and why those 72 hours matter)

Those three days between “something’s wrong” and “this is material” are where companies either get control fast—or lose weeks to confusion.

West Pharmaceutical said it detected an intrusion on May 4, 2026, then activated incident response protocols immediately. The early moves were blunt but practical: they took systems offline globally for containment, notified law enforcement, and engaged external cyber-forensic experts.

Then came the update that changes the stakes. West stated that on May 7, 2026, it determined it had experienced a material cybersecurity incident where certain data was exfiltrated and certain systems were encrypted.

What “material cybersecurity incident” signals (in plain English)

“Material” isn’t a vibes-based label. It’s a serious line in the sand for:

  • Executives and the board: this is now a business-risk event, not “just an IT issue.”
  • Regulators and investors: disclosure expectations kick in, and timelines get tighter.
  • Attackers watching: ransomware crews track public filings and news; “material” can influence pressure tactics.

It also implies a hard truth: by May 7, West had enough evidence to say impact was significant—even while the investigation was still developing. West noted an investigation was underway to determine the exact nature and scope and what type of data was stolen.

Translate West’s first moves into a checklist you can actually use

If you’re building a ransomware incident response plan, West’s sequence maps cleanly to four immediate priorities:

  1. Containment: be ready to shut down and isolate affected infrastructure fast (they did it globally).
  2. Access control: restrict access to enterprise systems so attackers with compromised creds can’t keep moving laterally.
  3. Notification: notify law enforcement early, even when details are incomplete.
  4. Outside help: bring in specialists for forensics + containment + recovery. West engaged Palo Alto Networks’ Unit 42 for incident response and recovery, alongside other experts and legal counsel.

That 72-hour window is the gut-check: can you isolate systems quickly, control access, preserve evidence, and get credible help—without guessing in front of leadership?
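
What does “shut down and isolate fast” look like in practice? Below is a minimal sketch of an emergency isolation step for a single Linux server, assuming iptables and a placeholder IR jump-host address; a real playbook would also cover Windows hosts, cloud security groups, and switch- or EDR-level isolation.

```python
#!/usr/bin/env python3
"""Emergency host isolation: drop all traffic except an allowlisted IR jump host.

Sketch only; run as root on the affected Linux host. IR_JUMP_HOST is a
placeholder; substitute the address your responders actually connect from.
"""
import subprocess

IR_JUMP_HOST = "203.0.113.10"  # placeholder: your IR team's jump host


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def isolate():
    # Flush existing rules so nothing quietly re-opens a path.
    run(["iptables", "-F"])
    # Default-deny everything in, out, and through this host.
    for chain in ("INPUT", "OUTPUT", "FORWARD"):
        run(["iptables", "-P", chain, "DROP"])
    # Keep loopback alive so local services don't fall over mid-forensics.
    run(["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"])
    run(["iptables", "-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"])
    # Allow the IR jump host in over SSH, and the replies back out.
    run(["iptables", "-A", "INPUT", "-s", IR_JUMP_HOST, "-p", "tcp",
         "--dport", "22", "-j", "ACCEPT"])
    run(["iptables", "-A", "OUTPUT", "-d", IR_JUMP_HOST, "-p", "tcp",
         "--sport", "22", "-j", "ACCEPT"])


if __name__ == "__main__":
    isolate()
```

The point isn’t this exact script. It’s that the “take it down” action is scripted, tested, and doesn’t depend on someone remembering firewall syntax at 2 a.m.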

Two bad days at once: data theft + encryption (and what’s still unknown)

When a ransomware attack hits, most teams brace for one headline: “systems encrypted.” West’s disclosure had the second punch too: data exfiltration. They stated that certain data was exfiltrated and certain systems were encrypted.

Those are two different problems, happening at the same time:

Problem #1: Exfiltration (data theft)

Exfiltration means copies of data left your network. Even if you restore every server, you still have an exposure problem.

Your immediate priorities shift to:

  • Confirm what was accessed and pulled (not what you think is “important,” but what was actually touched).
  • Contain ongoing access paths (stolen tokens, VPN creds, service accounts, API keys); see the rotation sketch after this list.
  • Reduce downstream harm if the data gets shared.
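
On the second bullet, here’s a minimal sketch of bulk key rotation, assuming (purely for illustration) that the suspect service accounts are AWS IAM users and that you’re running it with boto3 from a clean admin workstation. The same idea applies to VPN creds, OAuth tokens, and vault-managed secrets.

```python
"""Deactivate every access key on suspect service accounts so copied keys stop
working. Sketch only; assumes AWS IAM service accounts and boto3, and the
account list is a placeholder you'd feed from the investigation."""
import boto3

SUSPECT_ACCOUNTS = ["svc-erp-integration", "svc-shipping-api"]  # placeholders

iam = boto3.client("iam")

for user in SUSPECT_ACCOUNTS:
    for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
        if key["Status"] == "Active":
            # Deactivate rather than delete so the key ID stays visible for forensics.
            iam.update_access_key(UserName=user,
                                  AccessKeyId=key["AccessKeyId"],
                                  Status="Inactive")
            print(f"Deactivated {key['AccessKeyId']} for {user}")
    # Re-issue replacements (iam.create_access_key) only once you're confident
    # the account itself is clean; IAM allows at most two keys per user.
```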

West noted the investigation is still underway to determine the exact nature and scope of the incident and the type of data the attacker stole. That’s normal—and it’s the part that makes leaders uncomfortable.

Problem #2: Encryption (loss of internal access)

Encryption is operational pain: systems stop, workflows break, teams start improvising in spreadsheets and personal email.

What matters early:

  • Triage what’s encrypted (endpoints vs servers vs hypervisors vs backups).
  • Protect recovery paths (don’t let the attacker hit your restore environment next).
  • Preserve evidence so you can answer “how did they get in?” later without guessing.
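
On the evidence point: before anyone rebuilds or reboots, capture what you can and record hashes so you can prove it hasn’t changed since collection. A minimal sketch, assuming a Linux host and a placeholder evidence share; real collections also grab memory, EDR telemetry, and hypervisor snapshots.

```python
"""Copy key artifacts to an evidence location and record SHA-256 hashes, so
"how did they get in?" can later be answered from preserved data, not memory.
Sketch only; paths are placeholders."""
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

ARTIFACTS = [Path("/var/log/auth.log"), Path("/var/log/syslog")]  # placeholders
EVIDENCE_DIR = Path("/mnt/evidence/host01")                       # placeholder share


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


EVIDENCE_DIR.mkdir(parents=True, exist_ok=True)
with (EVIDENCE_DIR / "manifest.txt").open("a") as manifest:
    for artifact in ARTIFACTS:
        dest = EVIDENCE_DIR / artifact.name
        shutil.copy2(artifact, dest)  # copy2 keeps original timestamps
        line = (f"{datetime.now(timezone.utc).isoformat()} "
                f"{artifact} sha256={sha256(dest)}")
        manifest.write(line + "\n")
        print(line)
```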

The “unknowns” you can’t bluff on day one

A lot of companies burn trust by overpromising in the first 24–72 hours. If you’re being honest, you usually don’t know:

  • Exactly what data was taken (West explicitly said this is still being determined).
  • How far the attacker moved inside the network before encryption.
  • Whether the stolen data will surface (leak site, private extortion, or quiet resale).

West also said it took steps to mitigate the risk of dissemination of the exfiltrated data, without detailing what those steps were. That line is a reminder: even after containment, you’re still playing defense against what’s already out the door.

Operational reality: keeping the business alive while security cleans up

After the initial shock, the fight turns into something less dramatic and more exhausting: getting the company functional again without re-infecting yourself.

West’s update is a clean example of what “recovery” often looks like in real life: they said they restored core enterprise systems that support shipping and manufacturing operations, and that manufacturing has been partially restarted. They also said complete restoration hasn’t been achieved and they didn’t provide a timeline for full restoration.

That combination matters. It implies a few things that most leadership teams underestimate:

  • “Core systems restored” doesn’t mean “business as usual.”
  • “Partial restart” is code for uneven throughput, manual workarounds, and constant prioritization calls.
  • “No timeline” is often the honest answer when you’re rebuilding safely instead of rushing.

What “core” should mean in a ransomware recovery plan

If your incident response plan can’t define “core,” you’ll waste days in internal debates.

A useful definition is: the smallest set of systems required to ship product, invoice it, and keep people safe.

For many manufacturing orgs, that usually includes:

  • Identity and access (AD/SSO, MFA, privileged access workflows)
  • Network services (DNS/DHCP, core routing, VPN with strict controls)
  • ERP + order-to-cash (orders, inventory, finance)
  • Shipping (WMS/TMS, label printing, carrier integrations)
  • Manufacturing execution (MES/SCADA gateways, historian access paths—handled carefully)
  • Email/collaboration (sometimes delayed if it’s a risk magnet)

A tactical restore order that doesn’t get you burned

You’re balancing two clocks: operational downtime and attacker persistence.

Use a restore sequence that forces safety gates:

  1. Build a clean “recovery bubble”
    New admin workstations, separated network segments, tight egress rules.
  2. Re-establish identity carefully
    Reset privileged creds, rotate secrets, re-issue admin access with short-lived elevation where possible.
  3. Restore the business spine before the limbs
    ERP + shipping often come before broad end-user restore, because they unblock revenue fastest.
  4. Validate before reconnecting
    Don’t just “bring servers back.” Prove they’re clean: EDR coverage, patched images, known-good configs, and logs flowing.
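
Step 4 is where rushed recoveries go wrong, so make the gate explicit and boring. A minimal sketch of a “ready to reconnect” check, assuming a Linux server and hypothetical service names for the EDR agent and log shipper; substitute whatever you actually run and extend it with patch and config checks.

```python
"""Pre-reconnect gate: a rebuilt host rejoins the network only if every check
passes. Sketch only; the systemd unit names are placeholders."""
import subprocess

CHECKS = {
    "EDR agent running": ["systemctl", "is-active", "--quiet", "edr-agent"],      # placeholder unit
    "Log shipper running": ["systemctl", "is-active", "--quiet", "log-shipper"],  # placeholder unit
}


def gate() -> bool:
    all_passed = True
    for name, cmd in CHECKS.items():
        passed = subprocess.run(cmd).returncode == 0
        print(f"[{'PASS' if passed else 'FAIL'}] {name}")
        all_passed = all_passed and passed
    return all_passed


if __name__ == "__main__":
    if gate():
        print("All gates passed. This host may be reconnected.")
    else:
        raise SystemExit("Gate failed. Do not reconnect this host.")
```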

Plan for messy operations, not a clean rebound

West’s “partial restart” is a reminder: recovery is rarely a single flip of a switch.

Set expectations internally:

  • Some sites will be up while others are waiting on rebuilds.
  • You’ll run manual processes longer than you want.
  • You’ll revisit priorities daily as systems come online and new forensic findings land.

Outside responders, silent attackers, and what you should do before you need them

Once you’re past the first round of restore work, a different problem shows up: you can’t verify your own story fast enough. That’s where outside incident response teams earn their keep.

West said it engaged Palo Alto Networks’ Unit 42 for incident response, containment, and recovery, working alongside other external experts and legal counsel. That combo isn’t window dressing. It’s how companies keep the technical work, disclosure, and legal risk from stepping on each other.

What specialist responders actually do (in plain language)

A solid IR firm doesn’t “fix everything.” They help you make fewer bad calls under pressure:

  • Scoping: Figure out what’s impacted vs what’s just noisy. Expect a lot of “we don’t know yet, here’s how we’ll confirm.”
  • Forensics: Build a defensible timeline—entry point, privilege escalation, lateral movement, exfil paths.
  • Containment guidance: Help you cut off persistence without destroying evidence you’ll need for regulators, insurers, or litigation.
  • Recovery guardrails: Review rebuild plans so you don’t restore from a poisoned source or reconnect a compromised segment.
  • Decision support: Give leadership real options (and tradeoffs) when every option feels awful.

Legal counsel being involved early matters because breach response isn’t only technical. It’s also about what you say publicly, when you say it, and what you can support with evidence.

“No one claimed it” doesn’t mean you’re safe

West also noted no ransomware group had taken credit at the time of reporting. That happens more often than people think.

Common reasons attackers stay quiet:

  • Negotiation strategy: They may be trying to pressure you privately before going public.
  • Op-sec: Some crews avoid publicity to reduce heat from law enforcement.
  • Uncertainty on their side: If their encryption run failed or access got cut, they may wait to see if you reach out.

Your move is simple: treat it like real ransomware until proven otherwise. Keep containment tight, preserve evidence, and assume any credentials present on impacted systems are burned.

What to do before you need outside help

You don’t want your first call with an IR firm to be during a crisis.

Set up the basics now:

  • Pre-negotiate a retainer or MSA (so procurement doesn’t stall you).
  • Know who can authorize the call at 2 a.m. (name + backup).
  • Keep clean admin access paths documented (IR teams can’t work if nobody can log in safely).
  • Centralize logs and system inventory so scoping isn’t guesswork on day one.

When the incident clock is running, speed matters. Paperwork shouldn’t be the thing slowing you down.

Your readiness test: what to put in place now (so you’re not guessing later)

West’s response gives you a clean way to pressure-test your own: can you isolate fast, lock down access, and bring in outside IR—without chaos? West described actions like shutdown/isolation for containment, restricting access to enterprise systems, notifying law enforcement, and engaging Unit 42 for incident response, containment, and recovery.

Here’s the readiness checklist that maps to those moves.

1) Isolation capability (the “take it down” muscle)

If ransomware is spreading, speed beats perfection.

  • Pre-built network segmentation and the ability to cut off “known bad” subnets
  • Emergency disable for VPN/remote access paths
  • A written decision tree for when to go global vs site-by-site:
    • Encryption spreading? Take down wider.
    • Unknown blast radius + privileged compromise suspected? Take down wider.
    • Single isolated host, high confidence? Keep scope tight.
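
Written down, that tree can be as simple as a few lines the on-call engineer follows instead of re-deriving them under pressure. A sketch, with inputs and defaults you’d tune to your own environment:

```python
"""The containment-scope decision tree above, as code. Sketch only; adjust the
inputs and defaults to match your environment and risk tolerance."""


def containment_scope(encryption_spreading: bool,
                      privileged_compromise_suspected: bool,
                      blast_radius_known: bool) -> str:
    if encryption_spreading:
        return "global"        # encryption spreading: take down wider
    if privileged_compromise_suspected and not blast_radius_known:
        return "global"        # unknown blast radius plus admin compromise: take down wider
    if blast_radius_known:
        return "site-or-host"  # single host or site, high confidence: keep scope tight
    return "site-or-host"      # default tight, but revisit as new evidence lands


# Example: admin compromise suspected, blast radius still unknown -> go wide.
print(containment_scope(encryption_spreading=False,
                        privileged_compromise_suspected=True,
                        blast_radius_known=False))  # prints "global"
```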

2) Privileged access controls (because attackers love admin)

West specifically called out restricting access to enterprise systems.

  • Separate admin accounts, MFA, and just-in-time elevation
  • Fast credential rotation for admins, service accounts, API keys
  • Break-glass accounts that are tested, monitored, and stored offline

3) Logging + EDR coverage you can trust during a crisis

  • Centralized logs (auth, endpoint, server, VPN, cloud) retained long enough to answer “when did this start?”; see the query sketch after this list
  • EDR on servers and endpoints with alerting that hits an out-of-band channel (not just email)
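
“Retained long enough” only matters if you can actually query it under pressure. A minimal sketch of the day-one question, finding the earliest sign of an indicator in a consolidated auth-log export; the export path and column names are placeholder assumptions, and in practice this is a saved search in whatever SIEM holds your centralized logs.

```python
"""Find the earliest occurrence of an indicator (here, a suspect source IP) in a
consolidated auth-log export, to start bounding "when did this start?".
Sketch only; the export path and column names are placeholders."""
import csv
from datetime import datetime

EXPORT = "auth_events.csv"    # placeholder export: timestamp,user,source_ip,action
SUSPECT_IP = "198.51.100.23"  # placeholder indicator from the investigation

earliest = None
with open(EXPORT, newline="") as f:
    for row in csv.DictReader(f):
        if row["source_ip"] == SUSPECT_IP:
            ts = datetime.fromisoformat(row["timestamp"])
            if earliest is None or ts < earliest[0]:
                earliest = (ts, row)

if earliest:
    print(f"First seen {earliest[0].isoformat()}: {earliest[1]}")
else:
    print("Indicator not in this export; widen the time range or log sources.")
```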

4) Offline backups + restore drills (the part people skip)

  • Immutable/offline backups that can’t be encrypted from the domain (example after this list)
  • Quarterly restore tests for:
    • Identity (AD/SSO)
    • ERP/shipping/manufacturing dependencies
    • A full site “from bare metal” scenario
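
“Can’t be encrypted from the domain” means the backup copy lives somewhere a stolen domain-admin credential can’t delete or overwrite it. One common pattern is object storage with compliance-mode retention; a minimal sketch, assuming (for illustration) boto3 and an S3 bucket that was created with Object Lock enabled:

```python
"""Store a backup archive with a compliance-mode retention lock, so even the
uploading credentials can't delete or overwrite it before the date passes.
Sketch only; the bucket, key, and archive name are placeholders, and the bucket
must have been created with Object Lock enabled."""
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

with open("erp_backup_2026-05-14.tar.gz", "rb") as body:  # placeholder archive
    s3.put_object(
        Bucket="example-immutable-backups",                # placeholder bucket
        Key="erp/erp_backup_2026-05-14.tar.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
        ChecksumAlgorithm="SHA256",  # S3 requires an integrity checksum on Object Lock writes
    )

print("Backup stored with a 90-day compliance lock.")
```

Treat the restore drills above as seriously as the write path; an immutable copy you’ve never rehearsed restoring is still a gamble.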

5) Comms + legal workflow (don’t invent answers)

West worked with external experts and legal counsel.

  • A single internal source of truth (war room doc + update cadence)
  • Pre-approved holding statements
  • Clear rules on what employees can say internally and externally

6) Outside responders: have the phone number before you’re bleeding

West engaged Unit 42. Your equivalent should be ready:

  • Retainer/MSA in place
  • 24/7 escalation path
  • Who can approve spend and sign scopes of work

Where Cloaked fits (small change, big fallout reduction)

A lot of exfiltration damage is avoidable if stolen datasets don’t contain direct identifiers.

Cloaked can help by letting teams use masked emails and phone numbers for vendor onboarding, partner portals, sign-ups, and other external workflows. If an inbox, CRM export, or ticketing system gets pulled in a breach, attackers don’t automatically get everyone’s real contact details. It’s not a replacement for IR, backups, or EDR—just a practical way to reduce the blast radius when data theft happens.
