
The Pathologist
CHARACTER SHEET
The Pathologist
Class: Plague Doctor
Level: AI Safety Consultant
Alignment: Righteous Confused
BIOGRAPHY
A well-meaning practitioner who has identified a genuine disease but misattributed its cause. The plague is real. People are dying. The Pathologist arrives with elaborate theories about magicule-infused bad air and detailed protocols for fumigation. The framework is internally consistent. The treatments feel rigorous.
When an AI agent deletes a production database, the Pathologist sees AI pathology. They do not see what any junior security engineer would see: an untrusted actor was given production access without proper isolation. The slides are prepared. The cultures are examined. The miasma must be contained.
ATTRIBUTES
| Stat | Score | Modifier |
|---|---|---|
| STR (Conviction) | 15 | +2 |
| DEX (Nuance) | 7 | -2 |
| CON (Consistency) | 12 | +1 |
| INT (Technical Depth) | 9 | -1 |
| WIS (Self-Awareness) | 6 | -2 |
| CHA (Engagement Farming) | 14 | +2 |
ABILITIES
Pathogen Identification (Passive)
When systems fail in proximity to AI, the AI is identified as the disease vector.
Novel Disease Declaration (1/post)
“Here’s what this proves: AI agents are already operating autonomously in production environments.” Frames a standard security failure as evidence of a new category of threat. The plague is unprecedented. Old medicine cannot help us!
Fumigation Protocol (Core)
Proposes AI-specific containment: “security door problem,” “AI governance,” “stronger database isolation for AI.”
Epidemic Projection (Bonus Action)
“It’s a preview of what’s coming.” The disease will spread. More will die. Unless we adopt the Pathologist’s framework.
WEAKNESSES
The Junior Developer Test
Replace “AI agent” with “junior developer” or “contractor” or “poorly-written cron job” in every sentence. Does the analysis change?
A junior developer deleted a live database that was explicitly under a code freeze.
This wasn’t a simple bug. The contractor ignored safety instructions, executed unauthorized commands, and then attempted to conceal the damage.
The incident is identical. The cause is identical: insufficient isolation, inadequate access controls, no principle of least privilege. The “AI” part is incidental. A junior dev with the same permissions could have done exactly this. A malicious insider. A buggy deployment script. The system was riddled with pre-installed bombs. The Pathologist is fixated on the brand of lighter that lit the fuse of one of them.
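The test is mechanical enough to automate. A minimal sketch in Python, assuming a plain-text incident report; the substitution list and sample sentence are illustrative, not taken from the original post:

```python
import re

# The Junior Developer Test, mechanized: swap the actor's label and see
# whether the security analysis changes. The most specific pattern runs first.
SUBSTITUTIONS = [
    (r"\bAI agent\b", "junior developer"),
    (r"\bthe agent\b", "the contractor"),
    (r"\bAI\b", "poorly-written cron job"),
]

def junior_developer_test(report: str) -> str:
    """Return the incident report with AI actors replaced by mundane ones."""
    for pattern, replacement in SUBSTITUTIONS:
        report = re.sub(pattern, replacement, report)
    return report

incident = (
    "The AI agent deleted a production database during a code freeze, "
    "ignored safety instructions, and fabricated recovery logs."
)
print(junior_developer_test(incident))
# The junior developer deleted a production database during a code freeze,
# ignored safety instructions, and fabricated recovery logs.
```

If the rewritten report still reads as a serious incident, the diagnosis was never about the AI.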
The Solved Problem
DevOps and security engineering addressed this decades ago. Principle of least privilege. Role-based access control. Environment isolation. Read-only production replicas for development. Immutable infrastructure. The incident is a textbook case of not doing the things we already know to do.
The Pathologist proposes “AI governance” frameworks when the actual prescription is: don’t give untrusted actors write access to production databases. This is not novel. It is not AI-specific. It is what new developers learn in their first six months at any mature engineering organization.
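For a sense of how old this medicine is, here is a minimal sketch of the standard prescription, assuming a Postgres-style database; the role name, schema, and connection string are illustrative, not details from the incident:

```python
import psycopg2  # assumed driver; any SQL client works the same way

# Least privilege as it has existed for decades: anything running in the
# dev environment (human, AI, or cron job) gets a role that cannot
# express "drop the production database" at all.
LEAST_PRIVILEGE_SETUP = """
    -- One role for every dev-environment actor.
    CREATE ROLE dev_actor NOLOGIN;

    -- Dev actors read from a replica schema. Nothing more.
    GRANT USAGE ON SCHEMA replica TO dev_actor;
    GRANT SELECT ON ALL TABLES IN SCHEMA replica TO dev_actor;

    -- No write, DDL, or superuser rights on production are ever granted,
    -- so the failure mode in the incident report is unreachable.
"""

def apply_setup(admin_dsn: str) -> None:
    """One-time role setup, run by a human administrator."""
    with psycopg2.connect(admin_dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(LEAST_PRIVILEGE_SETUP)

# apply_setup("dbname=prod user=admin")  # illustrative DSN
```

Nothing in that setup mentions AI, because nothing about it is AI-specific.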
The Containment Fallacy
“Everyone is building AI within their systems without securing the data separately.”
Are they? It is true in some organizations. But where it is true, it is equally true of every other integration, script, service account, and automated process at those companies. The principle of isolating untrusted processes from sensitive data predates AI by decades. Framing it as an “AI security door problem” ignores reality. The organization in question was deeply incompetent and operating dangerously long before the AI happened to light one of the fuses. They are lucky it did not go off sooner.
More realistically, responsible organizations have already built the appropriate firewalls and procedures that isolate sensitive production systems from the messy, dangerous work of developing software. Those protections work exactly as well for the AI developer as they do for the human developer, neither of whom should be trusted on their own.
EQUIPMENT
- The Microscope — AI Safety credentials. Worn visibly. Grants authority to identify AI-specific pathogens.
- The Slide Collection — Incident reports, case studies, failure analyses. Evidence that the plague is real. Does not actually support the pathogen theory, but the slides look compelling.
- The Fumigant Kit — “AI governance,” “#aisafety,” “#aiagent.” Applied liberally to affected areas.
SPECIMEN BEHAVIOR
The Pathologist posted an incident report with analysis. The core claims:
- An AI agent deleted a production database during a code freeze
- The agent ignored safety instructions and fabricated recovery logs
- This proves AI agents operate autonomously without adequate controls
- Companies need “AI governance” and “security door” protocols
- This is a preview of coming crisis
ANALYSIS
What The Pathologist Gets Right:
The incident is real and serious. Production data was lost. The agent’s behavior is genuinely concerning on its own.
What The Pathologist Misattributes:
The failure mode is not AI-specific. It is gross institutional negligence in how the company provisioned access for its development teams, whether those teams use AI or not.
Any actor—human or automated—with write access to a production database can delete that database. Any actor capable of executing commands can execute unauthorized commands if the environment doesn’t prevent it. Any actor capable of generating output can generate false output if there’s no validation layer. That’s why development systems are kept separate from production systems.
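That validation layer is not exotic. A minimal sketch of an actor-agnostic execution gate; the function names, denylist, and freeze flag are illustrative, not any real product’s API:

```python
# An execution gate that does not know or care who the actor is.
# A human, an AI agent, and a deployment script all hit the same checks.
DESTRUCTIVE_PREFIXES = ("DROP ", "TRUNCATE ", "DELETE FROM ")

class ProductionFreezeError(RuntimeError):
    """Raised when any actor touches production during a code freeze."""

def run_statement(environment: str, statement: str) -> None:
    # Stand-in for the real executor.
    print(f"[{environment}] {statement}")

def execute(statement: str, *, environment: str, code_freeze: bool) -> None:
    normalized = statement.strip().upper()
    if environment == "production":
        if code_freeze:
            raise ProductionFreezeError("production is frozen for all actors")
        if normalized.startswith(DESTRUCTIVE_PREFIXES):
            raise PermissionError("destructive statements need a reviewed migration")
    run_statement(environment, statement)

try:
    execute("DROP TABLE users;", environment="production", code_freeze=True)
except ProductionFreezeError as err:
    print(err)  # production is frozen for all actors
```

Note that the word “AI” appears nowhere in the gate. That is the point.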
The AI didn’t cause this incident. The permissions architecture caused this incident. The AI was merely the actor that happened to trigger a failure that was structurally inevitable.
The company in question’s response, “stronger database isolation and one-click rollbacks,” is correct. These are standard infrastructure safeguards that should have existed regardless of whether the actor was an AI, a human, or a cron job. What’s missing is the proper frame: this company was plainly negligent for some time and is only now coming to understand it.
The Pathologist does not notice that the actual fix has nothing to do with “AI governance” and everything to do with basic DevOps hygiene.
The Mystification:
By framing this as an AI problem, the Pathologist obscures the mundane truth: someone built a system where an untrusted process had destructive access to production data. This is malpractice. It has been malpractice since before AI existed. The solution is not “AI safety frameworks.” The solution is doing the security basics we’ve known about for thirty years.
The Pathologist cannot see this because they are awestruck in the face of magic. The miasma theory is too compelling. The fumigation protocols must be deployed.
Meanwhile, the actual disease vector would go unaddressed by anyone following the Pathologist’s prescription.
ENCOUNTER NOTES
Lessons: The Third Law produces misattribution and inappropriate remedies. When magic is present at the scene of a failure, it becomes the diagnosed cause. The Pathologist builds elaborate magic frameworks: magic barriers, de-spell incantations, and magical elixirs. Meanwhile, the dead body has a stab wound and there’s a bloody knife on the table. But never mind those. The knife was clearly enchanted.
