Most insider threats don’t look dramatic at the start. They look like an awkward termination call, a laptop that still hasn’t been shipped back, and a few accounts that “we’ll disable by end of day.” In the Sohaib Akhter case, prosecutors say the window between termination and damage was measured in hours: enough time to write-protect databases, delete data, and try to cover tracks across roughly 96 government databases. If you work in or support an agency, this is the gut-check: would your offboarding and monitoring actually stop the blast radius, or just help you document it after the fact?
The Timeline That Matters (and Why It’s So Familiar)
Here’s the part that should make any agency, integrator, or federal contractor sit up straight: this wasn’t some slow-burn breach that took weeks to spot. Prosecutors describe a clean, brutal sequence where the gap between termination and data destruction was basically a coffee break.
What prosecutors say happened (in the order that matters)
- Past behavior was already a flashing red light. Sohaib Akhter and his twin brother had prior prison sentences tied to unauthorized access of U.S. State Department systems and theft of coworkers’ personal information.
- They were brought back into the ecosystem anyway. After serving their sentences, they were rehired as government contractors by a company supporting 45+ federal agencies and hosting government data on servers in Ashburn.
- The trigger event was administrative, not technical. When the company discovered Sohaib’s felony conviction, both brothers were terminated during an online remote meeting on Feb. 18, 2025.
- Then came the insider threat “speed run.” The Justice Department says immediately after being fired they accessed computers without authorization, write-protected databases, deleted databases, and destroyed evidence.
- Impact was broad and fast. Court documents allege they wiped roughly 96 government databases within several hours, including sensitive investigative documents and Freedom of Information Act (FOIA) records.
That’s the timeline you should be threat-modeling: remote termination → residual access → privileged actions → rapid delete/destruction.
Why this feels uncomfortably familiar inside agencies
Most environments don’t fail because nobody cares. They fail because “disable by end of day” quietly becomes the norm.
These are the weak moments this case puts under a spotlight:
- Remote access that outlives the relationship (VPN, VDI, jump boxes, saved sessions)
- Over-broad admin permissions that make destructive actions a single-person decision
- Delayed credential revocation across SSO + cloud + database + tooling
- Shared admin habits (shared accounts, reused credentials, “temporary” access that sticks)
If your offboarding checklist relies on a human to remember six consoles and three tickets while HR is still drafting the separation email, you’re not running an offboarding process. You’re gambling with the hours right after termination—the same hours prosecutors say were enough to take out 96 databases.
The Detail Everyone Skips: “How Do I Clear System Logs?”
When an insider destroys data, the next move is often simpler than people expect: hide the receipts.
In the Akhter case, prosecutors allege that immediately after deleting a Department of Homeland Security (DHS) database, the brothers asked an AI assistant how to clear system logs. That one detail matters because it tells you the mindset: the goal isn’t just sabotage. It’s sabotage + deniability.
“Covering tracks” in plain English
Logs are the timeline you rely on after something goes wrong:
- Who logged in
- From where (IP, device, VPN path)
- What they touched (databases, tables, files)
- What they ran (admin commands, delete actions, permission changes)
Clearing logs is the equivalent of walking out of a store and trying to erase the security footage. It doesn’t “undo” the damage. It tries to block attribution, slow incident response, and muddy what you can prove in court.
The hard truth: assume admins can and will try to tamper
A lot of environments still treat logging like a nice-to-have feature on the same systems being attacked. That’s fragile.
If the same privileged access that can delete a database can also delete the audit trail, you don’t have logging. You have a suggestion.
Defensive takeaway: design logs so insiders can’t erase them
You want tamper-resistant logging that survives a compromised admin account:
1) Push logs off the box, fast
- Forward OS, database, and identity logs to a central log platform/SIEM in near real time.
- Treat “local-only logs” as temporary cache, not evidence.
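To make the forwarding pattern concrete, here’s a minimal Python sketch of a shipper that tails a local log and pushes batches to a central collector within seconds. The `SIEM_URL`, token, host name, and log path are all placeholders; in practice you’d lean on your platform’s native agent (rsyslog, Windows Event Forwarding, or a vendor forwarder) rather than rolling your own.

```python
import json
import time
import urllib.request

# Hypothetical SIEM HTTP ingest endpoint: substitute your platform's
# collector URL and auth. This only illustrates the "ship it off the
# box fast" pattern, not a production forwarder.
SIEM_URL = "https://siem.example.gov/ingest"
API_TOKEN = "REDACTED"

def ship(lines):
    """POST a batch of log lines to the central collector."""
    body = json.dumps({"source": "db-host-01", "events": lines}).encode()
    req = urllib.request.Request(
        SIEM_URL,
        data=body,
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def tail_and_forward(path="/var/log/postgresql/postgresql.log"):
    """Follow the local log and forward new lines in near real time."""
    with open(path) as f:
        f.seek(0, 2)  # start at the current end of file
        batch = []
        while True:
            line = f.readline()
            if line:
                batch.append(line.rstrip())
            # Ship on a full batch, or flush whatever we have at EOF
            if batch and (len(batch) >= 50 or not line):
                ship(batch)
                batch = []
            if not line:
                time.sleep(1)
```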
2) Make log storage immutable
Aim for WORM-style retention (write once, read many):
- Analysts can search.
- Admins can’t quietly delete or backdate entries.
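One concrete way to get WORM-style behavior is object storage with a compliance-mode retention lock. Here’s a sketch using AWS S3 Object Lock via boto3; the bucket and key names are illustrative, and the bucket must have been created with Object Lock enabled. Other clouds and log platforms offer equivalent immutability features.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK; adapt to your cloud's immutability feature

s3 = boto3.client("s3")

def write_worm_log(bucket: str, key: str, payload: bytes, retain_days: int = 365):
    """Write a log object that even an admin can't delete until retention expires.

    Assumes `bucket` was created with Object Lock enabled. COMPLIANCE mode
    means no identity (including root) can shorten the retention window.
    """
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=payload,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc)
        + timedelta(days=retain_days),
    )

# Example: one immutable object per forwarded batch
write_worm_log("agency-audit-logs", "db-host-01/2025-02-18T14-03Z.jsonl", b"...")
```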
3) Separate duties: the people with root shouldn’t own the receipts
- The team that runs systems shouldn’t be the same team that can purge audit evidence.
- Use different accounts, different approvals, different tooling.
4) Alert on log tampering like it’s data exfiltration
High-signal detections worth wiring up:
- Logging service stopped/restarted unexpectedly
- Audit policy changed
- Gaps in expected heartbeat events
- “Clear log” actions (Windows Event Log clears, syslog rotations outside policy)
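Here’s a minimal sketch of what those detections can look like in code, assuming your SIEM hands you normalized events. The Windows event IDs (1102, 104, 4719) are real; the event shape and the alerting hook are placeholders for your own pipeline.

```python
# Flag the tampering events that matter most. Event IDs are documented
# Windows values; `events` is assumed to be a normalized SIEM feed.

TAMPER_SIGNATURES = {
    ("windows", 1102): "Security event log cleared",
    ("windows", 104):  "System event log cleared",
    ("windows", 4719): "System audit policy was changed",
}

def check_for_tampering(events):
    alerts = []
    for ev in events:
        sig = (ev.get("source"), ev.get("event_id"))
        if sig in TAMPER_SIGNATURES:
            alerts.append({
                "severity": "critical",
                "rule": TAMPER_SIGNATURES[sig],
                "host": ev.get("host"),
                "user": ev.get("user"),
            })
    return alerts

# Also alert on silence: a host that stops sending heartbeats may have
# had its forwarder killed, which is itself a tampering signal.
def missing_heartbeat(last_seen_epoch, now_epoch, max_gap_seconds=300):
    return (now_epoch - last_seen_epoch) > max_gap_seconds
```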
5) Collect the “outside the host” truth
Even if an insider clears a server’s local logs, you can still reconstruct the story using:
- IdP/SSO logs
- VPN/ZTNA logs
- EDR telemetry
- Cloud control-plane logs (where applicable)
That’s the mental shift: you’re not trying to “catch” an insider on one machine. You’re building overlapping sources of truth—so even a determined attempt to erase logs becomes its own alarm bell.
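To make that overlap concrete, here’s a toy Python sketch that merges per-user events from independent sources into one timeline. The feeds and field names are hypothetical; the point is that an insider would have to tamper with every source at once to erase the story.

```python
from datetime import datetime

# Hypothetical normalized event feeds from the IdP, VPN, and EDR.
# Three independent timelines are much harder to erase than one.
def build_user_timeline(user: str, *feeds):
    """Merge events for one user from multiple sources, ordered by time."""
    merged = []
    for feed in feeds:
        merged.extend(e for e in feed if e["user"] == user)
    return sorted(merged, key=lambda e: e["timestamp"])

idp_events = [{"user": "jdoe", "timestamp": datetime(2025, 2, 18, 14, 1),
               "source": "idp", "action": "login"}]
vpn_events = [{"user": "jdoe", "timestamp": datetime(2025, 2, 18, 14, 2),
               "source": "vpn", "action": "tunnel_up"}]
edr_events = [{"user": "jdoe", "timestamp": datetime(2025, 2, 18, 14, 9),
               "source": "edr", "action": "wevtutil cl Security"}]

for event in build_user_timeline("jdoe", idp_events, vpn_events, edr_events):
    print(event["timestamp"], event["source"], event["action"])
```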
What Failed Operationally: Offboarding, Privilege, and “Too Much Trust”
If you take one thing from insider sabotage cases, make it this: people don’t “hack” their way into destruction when the system already hands them the keys.
In the Akhter case, prosecutors say the destructive activity included unauthorized access, write-protecting databases, deleting databases, and destroying evidence. That mix points to operational gaps we see over and over in agency environments and contractor-heavy programs.
The control gaps this kind of insider threat exploits
1) Offboarding that’s slow, manual, or split across teams
Offboarding fails when access removal is treated like a ticket queue item instead of an emergency workflow. Common “oops” moments:
- SSO is disabled, but VPN/ZTNA sessions stay alive
- User account is disabled, but service accounts, API keys, or shared admin creds still work
- App access is removed, but database-native users are untouched
If a termination is happening, access should be dropping in minutes, not “by end of day.”
2) Broad admin permissions (especially for contractors)
A contractor shouldn’t have standing permissions that allow:
- direct database deletes
- permission changes on audit/logging
- backup deletion or retention changes
Standing admin is convenient. It’s also how a single person gets a one-click blast radius.
3) No two-person control on destructive actions
If one identity can:
- approve the change
- execute the delete
- and cover it up
…you’ve built a single point of failure with a badge.
A tactical model that actually holds up under stress
Minimum viable privilege (MVP) for contractors
Set contractor access to the narrowest role that still lets them do the job:
- read-only by default for sensitive datasets
- no delete rights unless the task is explicitly a delete task
- no ability to change retention, backups, or audit settings
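As a sketch of what that looks like in practice, assuming a Postgres estate (role and database names are illustrative): a contractor role that can read but structurally cannot delete or touch audit settings.

```python
import psycopg2  # assumes Postgres; adapt the grants to your engine

# A read-only contractor role with no delete, no DDL, and no say over
# audit settings. Everything here is illustrative naming.
DDL = """
CREATE ROLE contractor_ro NOLOGIN;
GRANT CONNECT ON DATABASE case_records TO contractor_ro;
GRANT USAGE ON SCHEMA public TO contractor_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO contractor_ro;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO contractor_ro;
-- Deliberately absent: DELETE, TRUNCATE, DROP, and any audit/retention rights.
"""

with psycopg2.connect("dbname=case_records user=security_admin") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```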
Separation of duties (SoD) where damage is irreversible
Draw hard lines:
- DBAs run operations
- Security owns audit pipelines
- Backup operators control restore points
- Change approvers aren’t the same people executing
Yes, it adds friction. That friction is the point.
Just-in-time (JIT) privileged access for high-risk actions
For actions like dropping schemas, deleting records at scale, or changing retention:
- access is time-bound (minutes, not weeks)
- approval is logged
- sessions are recorded
- credentials rotate after use
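One common way to implement this is dynamic, short-lived credentials from a secrets manager. Here’s a sketch using HashiCorp Vault’s database secrets engine via the hvac client; the Vault URL and role name are placeholders, and the approval, session recording, and revocation-on-ticket-close steps would wrap around this call in a real workflow.

```python
import hvac  # HashiCorp Vault client; one common JIT implementation

# Mint database credentials that expire on their own. Assumes a Vault
# database secrets engine with a "dba-break-glass" role whose TTL is
# measured in minutes, not weeks.
client = hvac.Client(url="https://vault.example.gov", token="REDACTED")

creds = client.secrets.database.generate_credentials(name="dba-break-glass")
username = creds["data"]["username"]
password = creds["data"]["password"]
lease_seconds = creds["lease_duration"]  # access dies with the lease

print(f"Temporary DBA identity {username} expires in {lease_seconds}s")
```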
The litmus test
Ask your team one question and don’t accept hand-wavy answers:
“If a privileged contractor is terminated right now, can they still delete production data in the next 10 minutes?”
If the honest answer is “maybe,” you’ve found the failure mode. Now you can fix it.
Agency-Ready Playbook: 6 Controls That Shrink the Blast Radius Fast
When prosecutors describe an insider going from access to write-protecting databases, deleting databases, and destroying evidence in a short window, the lesson isn’t “be more careful.” It’s “build guardrails that work when people are angry, rushed, or malicious.”
Here are 6 controls that cut real risk fast, without a re-org.
1) Credential revocation in minutes (not “same day”)
Kill access and active sessions across your stack:
- IdP/SSO: disable user, revoke refresh tokens
- VPN/ZTNA: terminate sessions
- Privileged access tooling: revoke vault checkout, rotate shared secrets
- Cloud consoles + database-native accounts: disable or expire
Goal: after termination, there’s no “last login” window.
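As a sketch of what “minutes, not same day” can look like, here’s a Python example against Okta’s documented session and lifecycle endpoints (the org URL, token, and user ID are placeholders, and other IdPs have equivalents). Note that this only covers the IdP leg; VPN, PAM, and database-native accounts need their own kill switches.

```python
import requests  # sketch against Okta's public API

OKTA_BASE = "https://yourorg.okta.com/api/v1"
HEADERS = {"Authorization": "SSWS REDACTED_API_TOKEN"}

def emergency_revoke(user_id: str):
    """Terminate a user's IdP access in one call chain, not one ticket queue."""
    # 1. Kill every active SSO session and revoke issued OAuth tokens
    requests.delete(
        f"{OKTA_BASE}/users/{user_id}/sessions",
        params={"oauthTokens": "true"},
        headers=HEADERS, timeout=10,
    ).raise_for_status()

    # 2. Deactivate the account so nothing new can be issued
    requests.post(
        f"{OKTA_BASE}/users/{user_id}/lifecycle/deactivate",
        headers=HEADERS, timeout=10,
    ).raise_for_status()

emergency_revoke("00u1abcd2EFGHIJ3456")  # placeholder user ID
```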
2) Privileged session recording + real-time alerts
For high-risk admin paths (DB admin, backup admin, hypervisor, cloud root):
- record sessions (screen/commands)
- alert on risky actions (mass deletes, permission changes, retention changes)
- block copy/paste or file transfer when feasible
This is how you stop “I was just troubleshooting” stories from wasting weeks.
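Here’s a minimal sketch of the alerting half, assuming a stream of normalized database audit events. The threshold, event shape, and alert hook are placeholders you’d tune against your own baseline.

```python
from collections import defaultdict

# Flag mass deletes and risky statements in a window of DB audit events.
MASS_DELETE_THRESHOLD = 1000  # rows per user per window; tune to baseline
RISKY_STATEMENTS = ("DROP", "TRUNCATE", "ALTER ROLE", "ALTER SYSTEM")

def scan_audit_window(events, alert):
    deletes_per_user = defaultdict(int)
    for ev in events:
        stmt = ev["statement"].upper()
        if stmt.startswith(RISKY_STATEMENTS):
            alert(f"Risky statement by {ev['user']}: {ev['statement']}")
        if stmt.startswith("DELETE"):
            deletes_per_user[ev["user"]] += ev.get("rows_affected", 0)
    for user, rows in deletes_per_user.items():
        if rows > MASS_DELETE_THRESHOLD:
            alert(f"Mass delete: {user} removed {rows} rows in one window")
```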
3) Immutable audit trails (so evidence can’t be destroyed)
Assume the insider will try to erase proof. Build logs they can’t delete:
- forward audit logs off-host immediately
- store them in immutable/WORM-like storage
- restrict delete rights to a separate security role
The Akhter case allegations include destroying evidence. Your logging architecture has to survive that.
4) Delete protection where it counts
Treat “delete” like a production outage waiting to happen.
- require approvals for destructive DB operations
- put guardrails on DROP/TRUNCATE in production
- use soft delete / recycle-bin patterns where supported
- set backup deletion protections (separate credentials, MFA, time delays)
If someone can delete data and also delete backups, you’re one bad day away from a headline.
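One way to put teeth on the DROP guardrail in Postgres is an event trigger that aborts drops outside an approved change window. This is a sketch (all names are illustrative); note that event triggers require superuser to create, and a superuser can also disable them, so this raises the bar rather than hard-stopping a fully privileged insider.

```python
import psycopg2  # continuing the Postgres assumption from earlier sketches

# A guardrail that rolls back DROP statements unless a separately
# controlled change-window flag is set for the session.
GUARDRAIL = """
CREATE OR REPLACE FUNCTION block_drops() RETURNS event_trigger AS $$
BEGIN
    IF current_setting('app.change_window', true) IS DISTINCT FROM 'open' THEN
        RAISE EXCEPTION 'DROP blocked: no approved change window is open';
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER no_casual_drops ON sql_drop
    EXECUTE FUNCTION block_drops();
"""

with psycopg2.connect("dbname=case_records user=security_admin") as conn:
    with conn.cursor() as cur:
        cur.execute(GUARDRAIL)
```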
5) Backup + restore drills tied to RPO/RTO (not vibes)
Backups aren’t real until you restore under pressure.
- define RPO (how much data you can lose) and RTO (how long you can be down)
- run quarterly restore tests on representative systems
- verify access: can you restore if the DBA account is gone?
- document the exact steps and owners
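A restore drill is easy enough to automate that “we think backups work” becomes a pass/fail number. Here’s a sketch assuming Postgres dumps; the paths, database names, and the 4-hour RTO are placeholders for your own targets.

```python
import subprocess
import time

# Time a real restore against the declared RTO. All values are placeholders.
RTO_SECONDS = 4 * 60 * 60  # example: 4-hour recovery time objective

def restore_drill(dump_path="/backups/case_records.dump",
                  target_db="restore_drill"):
    start = time.monotonic()
    subprocess.run(["createdb", target_db], check=True)
    subprocess.run(
        ["pg_restore", "--no-owner", "--dbname", target_db, dump_path],
        check=True,
    )
    elapsed = time.monotonic() - start
    result = "PASS" if elapsed <= RTO_SECONDS else "FAIL"
    print(f"Restore took {elapsed:.0f}s against a {RTO_SECONDS}s RTO: {result}")
    return elapsed

# Run this from an account that is NOT the day-to-day DBA identity, so the
# drill also proves you can restore after that identity is gone.
```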
6) Insider-focused incident response (IR) runbooks
Insider sabotage isn’t the same as ransomware. Your IR plan should include:
- immediate access freeze (identity, VPN, privileged sessions)
- evidence preservation (logs, endpoints, admin consoles)
- legal/HR coordination triggers
- laptop return handling and tamper checks (the case allegations include wiping laptops before returning them)
Monday morning checklist (leaders can assign in 30 minutes)
- Name the systems of record (IdP, VPN/ZTNA, PAM, SIEM, DB platforms).
- Pick one termination scenario and time it: “How fast can we fully cut access?”
- List every place privileged access exists (human + service accounts + shared creds).
- Identify which roles can perform irreversible actions (delete, retention change, backup delete).
- Confirm audit logs are forwarded off-host and protected from admin deletion.
- Schedule a restore drill date and require a written RPO/RTO result.
None of this is flashy. It’s the kind of operational discipline that stops an insider threat from turning into 96 wiped databases.
One Often-Missed Angle: Reducing Identity Exposure Outside Your Walls
Most insider-threat conversations stay “inside the perimeter”: VPN, admin roles, audit trails, backups.
That’s necessary. It’s also incomplete.
A lot of real damage starts outside your walls, in all the places where your team signed up for tools, vendor portals, demos, support chats, and SaaS trials using a real work email and a real phone number. Every one of those signups becomes a new path for:
- targeted phishing
- password reset abuse (“forgot password” loops)
- helpdesk social engineering with personal details
- credential stuffing on third-party portals that still have SSO bypasses
- SIM-swap pressure when phone numbers are used as an account recovery factor
None of this requires an advanced attacker. It just requires enough leaked data and a few good guesses.
Why this ties back to insider risk (even if the insider never “hacks” anything)
Insider incidents often include credential misuse and messy attribution. Prosecutors in the Akhter case referenced unauthorized access and the theft of credentials as part of what was proven at trial. When identities are sprayed across dozens of external services, you’ve created:
- more credentials to lose
- more places where recovery can be hijacked
- more noise when you’re trying to prove what happened
It’s also a morale issue. People use their real phone number because it’s easy, then spend the next two years getting vendor spam and sketchy “security alert” texts.
The practical fix: compartmentalize identifiers
You’re trying to reduce how often an employee’s real identifiers are used as keys to critical access.
Good patterns:
- Use role-based addresses for vendor accounts (not personal inboxes).
- Avoid SMS-based recovery for sensitive admin paths.
- Separate “evaluation” accounts from production admin identities.
- Rotate and retire vendor accounts during offboarding, same as internal accounts.
Where Cloaked fits (as a supporting control)
For teams that keep getting pulled into “just sign up with your email and phone,” Cloaked can help by providing masked emails and masked phone numbers for vendor signups and external tools.
That does two useful things:
- Limits phishing exposure tied to an employee’s real email/number.
- Makes it harder to abuse account recovery flows that rely on those identifiers.
It’s not a replacement for strong offboarding or privileged access controls. It’s a way to stop your organization’s identity footprint from spreading into places you don’t monitor, can’t log, and won’t remember until something breaks.