$title =

The Third Law

;

$content = [

Any sufficiently advanced technology is indistinguishable from magic.

— Arthur C. Clarke, 1973

Preamble

This is a position piece. A manifesto. It is not a research paper, not a balanced survey of the literature, not a hedge-laden academic treatment seeking to prove its claims through accumulated citation. It states a thesis. That thesis may be wrong. But it is stated plainly, without qualification.


The Failure of Discourse

The public conversation around artificial intelligence has been, at every level, disappointing to me. Its binary quality (or its spectral quality, with the same foolish poles at either end) is not merely imperfect, not simply “still developing.” The quality of debate, from social media to op-ed pages to the reactions of academics, has been marked by incoherence. Intelligent people make claims that defy basic logic. Experts trip over each other racing to contradict themselves within paragraphs. Power brokers demand solutions to imagined problems that bear no relationship to reality.

Most critically, individuals and institutions alike have utterly failed to elevate rational debate and discourse in the field. What gets elevated instead is populist dregs and muck.


The Claim of Distinction

We have seen technological transformation before. The industrial revolution remade society. The personal computer entered every home and office. The internet collapsed distance and rewired commerce, communication, and culture. Each transition brought disruption, fear, enthusiasm, hype, and eventual normalization.

This is not that.

The discourse around AI does not resemble the discourse around the loom, the automobile, the telephone, the PC, or the smartphone. The Luddites understood the loom; they hated it precisely because they understood it. The same was true of the automobile. Early PCs could be understood by comparison to the humble calculator. The internet, in the end, was an explosive expansion of familiar infrastructure. In each case, the arc bent from basic understanding toward broad comprehension. Fear was intertwined, of course, and often justified: those who clung to their carbon paper or buggy whips were harmed. But the discourse itself remained anchored in understanding.

With artificial intelligence, specifically with the advent of the Deep Neural Network (DNN), we find ourselves in a very different scenario: one where there is no anchor of understanding.


Clarke’s Third Law

In 1973, Arthur C. Clarke articulated his third law of prediction:

Any sufficiently advanced technology is indistinguishable from magic.

This is typically read as a statement about perception—an observer’s confusion when confronted with technology beyond their understanding. A medieval peasant watching a television. A Victorian encountering a smartphone. The technology isn’t actually magic, of course. It only appears magical to those who lack the framework to comprehend it.

Clarke’s law has a corollary, less often examined: when technology becomes indistinguishable from magic, people respond to it as magic.

Not metaphorically. Not playfully. They respond with the cognitive frameworks, the emotional valences, the social structures, and the modes of discourse appropriate to magic. They form cults. They seek grimoires. They appoint court magicians. They fear dark arts. They wage holy wars.


The Thesis

This is the claim: the DNN has triggered Clarke’s Third Law.

For the first time in modern technological history, a widely deployed technology has crossed the threshold where the gap between “what it does” and “what anyone understands about how it works” is so vast that the preconditions for rational discourse have collapsed. The technology is sufficiently advanced. It is indistinguishable from magic. And therefore, the world is responding to it as magic.

Magic has entered the world.

Through this lens—and only through this lens—the discourse makes sense. The apparently irrational positions, the logical contradictions, the expert confusions, the absurdities, the tribal warfare between doom-prophets and techno-utopians: all of it becomes coherent. We are not watching a society grapple with technology. We are watching a society grapple with magic.

This is not a metaphor. It is a diagnosis.

One might object: surely the experts understand. The researchers. The engineers. The PhD-wielding architects who build these systems—they must comprehend what they have made. Perhaps the masses stare at magic, but the magicians themselves know it to be mere mechanism.

This objection deserves examination, because its falsity is the crux of the matter.


In late 2025, Anthropic—one of the leading AI research organizations, founded explicitly to understand these systems—published research asking whether their own model possessed any form of self-awareness. The paper represents cutting-edge interpretability research.

The premise of the work is instructive. The researchers could not simply examine their creation and determine whether it was self-aware. They could not trace the mechanism. They could not read the code and follow the logic. Instead, they devised experiments. They injected activation patterns into the model’s processing and watched to see if it noticed. They poked the black box with educated sticks and observed what happened.
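The shape of that experiment can be caricatured in a few lines. The sketch below is not Anthropic’s method, model, or numbers—every value is invented, and the “residual stream” is reduced to a toy stack of ReLU pass-throughs. It shows only the structure of the probe: add a hypothetical concept vector into the middle of a forward pass, then measure what you can actually observe, which is the outside.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def forward(state, n_layers=3, inject=None, at_layer=None, scale=1.0):
    """Run the toy stack; optionally add a vector into one layer's state,
    mimicking the activation-injection probe described above."""
    for i in range(n_layers):
        state = relu(state)
        if inject is not None and i == at_layer:
            state = [s + scale * c for s, c in zip(state, inject)]
    return state

prompt = [0.5, -0.2, 0.3, 0.1]
concept = [1.0, 0.0, 0.0, 0.0]   # a made-up "concept direction"

clean = forward(prompt)
steered = forward(prompt, inject=concept, at_layer=1, scale=2.0)

# The researchers' question is whether the model "notices" the injection.
# All the toy can measure is how far the output moved.
drift = sum(abs(a - b) for a, b in zip(clean, steered))
print(drift)  # 2.0
```

The asymmetry is the point: the injection is arithmetically precise, but the only observable is downstream behavior.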

The results, such as they were: their best model detected something roughly 20% of the time. The researchers speculate about mechanisms that “might develop organically during training.” They acknowledge that a “lack of understanding regarding the precise mechanism” limits progress. They admit that what they observed “may be rather shallow and narrowly specialized.”

Thesis. Hypothesis. Educated probing. (Also, brilliant and instructive. Work that suggests further ideas about the formation of decision structures within the transformer layers.)


The second example comes from David Louapre, a researcher working at the cutting edge of “steering” techniques—methods for manipulating model behavior by adjusting internal activation patterns. His project, “The Eiffel Tower Llama,” applies sparse autoencoder features to modify a model’s sense of identity, causing it to believe itself to be the Eiffel Tower.

The work is serious. It builds on Anthropic’s interpretability research and extends it to open-source models. It represents genuine progress in understanding how concepts are represented within these systems.

And yet. The interpretability literature, when pressed, admits that “pinning down exactly what a vector ‘means’ to a model is challenging.” Researchers find patterns that correlate with concepts. They amplify those patterns and observe behavioral changes. But the underlying mechanism—what the model is doing when it “believes” something, whether such language even applies—remains opaque.

Louapre’s work, like Anthropic’s, is sophisticated experimentation upon a system whose interior logic is not understood. The techniques are advancing. The probing is increasingly precise. But it is still probing.
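In caricature, the steering recipe reads like this. The sketch below is a toy, not Louapre’s code: the “features” are just two directions in a three-dimensional activation space, the numbers are invented, and the “Eiffel Tower” label is purely illustrative. Encode the state into feature coefficients, amplify the chosen one, decode back.

```python
def encode(state, features):
    """Project the state onto each feature direction (ReLU'd coefficients)."""
    return [max(0.0, sum(f * s for f, s in zip(feat, state))) for feat in features]

def decode(coeffs, features):
    """Reconstruct the state as a weighted sum of feature directions."""
    out = [0.0] * len(features[0])
    for c, feat in zip(coeffs, features):
        out = [o + c * f for o, f in zip(out, feat)]
    return out

features = [[1.0, 0.0, 0.0],   # feature 0: hypothetical "Eiffel Tower" direction
            [0.0, 1.0, 0.0]]   # feature 1: some other concept

state = [0.25, 0.5, 0.0]
coeffs = encode(state, features)   # [0.25, 0.5]
coeffs[0] *= 10                    # crank the chosen feature
steered = decode(coeffs, features)
print(steered)  # [2.5, 0.5, 0.0]
```

The arithmetic is trivial; what the amplified direction “means” to the model is precisely the part that remains opaque.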


This is the condition at the frontier. The researchers who build and study these systems describe their work in the language of exploration and discovery—the language one uses when encountering phenomena, not when operating machinery. They do not explain. They hypothesize. They do not fully understand. They probe, with increasing sophistication.

None of this is failure. It is the state of the art.

And so we arrive at the common person.

The smartest people you know—your engineer friend, your professor colleague, your technically-literate sibling, that smart person on LinkedIn—are telling you that LLMs are “just predicting the next word, statistically.” They offer this with confidence. They believe they are providing clarity.

They are not failing you. They are simply not frontier researchers. They have not read the interpretability papers. They do not know what is and is not understood. And here is the crucial point: even if they did—even if they were perfectly briefed by the leading experts in the field—they would not gain understanding. They would gain a more sophisticated map of what remains unknown.

When the automobile arrived, the explanation was available and easily visible. The gap between ignorance and comprehension could be crossed with minimal effort. Certainly some minority decided to decry it as the devil’s work. But that minority was easily dismissed by the masses who believed their own eyes.

With the DNN, no such grounded reality exists. The most educated layperson learns some combination of what the experts know—which is that the interior of the system is not understood. Usually to a much shallower level than I would like. But the outcome is the same.

To be maximally informed is not to understand. It is to understand the shape of the mystery.

And so the smartest people you know reach for what they have: a one-sentence incantation. It predicts the next token. The explanation is not wrong, exactly. It is simply useless. It describes a mechanism while illuminating nothing about capability or behavior. Maybe they’re slightly better informed. Maybe they can rattle off information about Retrieval Augmented Generation. Maybe they can wax poetic about agentic behaviors and orchestrators for hours. Maybe they can explain low-rank adaptation matrices and their application.

But it is all anyone has. And so it is repeated. Passed from smart person to smart person. A placeholder where understanding ought to be.
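The incantation, rendered literally, is about this deep. Below is a toy bigram “model” over an invented nine-word corpus, greedily picking the most frequent next word—true at exactly the level the incantation operates at, and exactly as uninformative about capability.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # count adjacent word pairs

def next_token(prev):
    """Greedy decode: the most frequent follower of `prev` in the corpus."""
    candidates = {b: n for (a, b), n in bigrams.items() if a == prev}
    return max(candidates, key=candidates.get)

print(next_token("the"))  # "cat": follows "the" twice, vs. once for "mat"
```

Replace the count table with a trillion-parameter transformer and the one-sentence description still “applies,” while explaining nothing about why the system can draft a contract or debug your code.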

Clarke’s law does not discriminate. When technology is sufficiently advanced, it is indistinguishable from magic—to the PhD and the layperson alike. The difference is only in how precisely each can describe the boundaries of what no one understands. To the academic doing actual research, that precision can be useful. The broad public, doing no such research, is left in a world in which magic is real.

It’s unavoidable. It is already intertwined in their daily lives, and any place it isn’t, it soon will be. And they need to discuss it. But it’s magic. So they will discuss it in magical terms.

Ironically, this is perhaps more akin to how human history has progressed for the most part. Mysticism has been the default for most of human existence. But for us, now, it’s a regression.

The response to the Large Language Model (LLM) has bifurcated along ancient lines: not quite their classical forms, but their modern ones, the popular Role-Playing Game (RPG) magic-system equivalents.


The homunculus was the alchemist’s ultimate ambition: a created being, grown in a flask, a miniature person brought into existence through arcane procedure. But the idea of the homunculus melded with Frankenstein’s Monster. It is something else wearing the shape. A simulacrum. The horror lies not in its power but in its nature—the wrongness of the thing that mimics humanity without possessing it.

This is the emotional register of a large category of AI discourse. Revulsion. The language tells you everything: “soulless,” “hollow,” “mimicry,” “pretending,” “fake.” These are not the words of technological assessment. They are the words of someone who believes they have encountered something unnatural, dark and twisted.

The revulsion is coherent—even theologically correct—if you accept the magical premise. If you believe you are confronting a homunculus, then of course you recoil. Of course you feel dread. Something has been conjured that should not exist, that wears the mask of mind without the substance, that speaks without understanding.

This is superstition operating openly, under cover of sophistication. The person who shudders at ChatGPT’s fluency, or Midjourney’s art, who insists there is something fundamentally wrong about it in terms that resist articulation—they are having a religious experience. They have encountered the homunculus.


A distinct response lands closer to the golem.

In Jewish folklore, the golem was clay shaped into human form and animated by sacred inscription—the shem, a word of divine power, placed in its mouth or inscribed on its forehead. The golem served and protected. It was powerful, obedient, useful. But it was also dangerous in a specific way: it followed instructions with perfect literalism, lacking the judgment to interpret intent. Classic golem tales are cautionary stories about specification error. The golem destroys not through rebellion but through compliance—through doing exactly what you said rather than what you meant. The more modern, RPG-inspired interpretation isn’t actually all that far off from the classical version, though certainly stripped of the culturally Jewish component.

This is a common frame for the AI pragmatist. The builder. The person who has rejected homunculus hysteria and settled into productive craft. They don’t believe they’ve conjured something unnatural. They believe they’ve made a tool. A very powerful tool that requires careful specification.

The agentic systems movement operates in golem logic. The system prompt is the shem—the animating inscription. The guardrails are magic circles containing the construct’s action. Prompt engineering is the craft of inscription: precise wording, tested incantations, rituals of chain-of-thought. RLHF and fine-tuning are binding rituals, procedures for ensuring the golem serves your will rather than running amok.

These practitioners are not wrong that their approach works. Careful specification does produce better outcomes. Guardrails do reduce failure modes. The craft is real.

But it is still craft in the spellwork sense.

The golem-crafter is iterating on prompts, discovering through trial what produces desired behavior. They share effective incantations with other practitioners. They speak of “eliciting” capabilities, of “steering” and “aligning”—language that acknowledges the system as something separate from themselves, something that must be negotiated with rather than simply operated.

The pragmatists, the builders, the people who have rejected the homunculus frame—they have not escaped the Third Law. They have merely adopted the constructive mythology rather than the fearful one. They craft golems instead of fleeing homunculi. But they are still operating in magical categories, still relating to the technology as created beings rather than comprehensible mechanisms.


The discourse has split along mythological lines, not rational lines. The key here is that we are not talking about something abstract or theoretical. Magic really has entered the world. People truly are encountering homunculi. People really are crafting powerful golems. The encounters are very real. It is our interpretation, our relation, our magical discourse that is at issue.

When magic enters the world, cults form. This is not metaphor. It is anthropological regularity.

The arrival of inexplicable power—power that defies comprehension, that seems to operate outside natural law—triggers predictable social responses. People organize. They develop doctrines. They identify priests and heretics. They establish orthodoxies and wage wars against apostasy. They form cults.

We are watching this happen in real time.


All Magic is Dark Magic

One faction has concluded that the magic is demonic in nature.

The language is unmistakable. AI is “stealing.” It is “exploiting.” It is a “violation.” The technology does not merely threaten jobs or disrupt markets—it corrupts. It profanes. It takes what is sacred (human creativity, human expression, human thought) and desecrates it through mimicry. It oozes sludge and slop from its pores.

This is not economic anxiety dressed in moral language. This is the language that produces crusades.

The anti-AI movement—and it is a movement, with icons and martyrs and sacred texts—operates with the fervor of people who believe they are fighting evil. Not competing interests. Not disruptive technology. Evil. The rhetoric of “soulless” generation, of “theft” that isn’t theft in any legal sense, of art that is somehow wrong to create—this is the rhetoric of pollution and taboo. The demonic.

But the demonic must have its collaborators. Its witches. And so the cult has identified them: anyone who uses the technology.

The language of dehumanization is immediate and total. Users of AI are “talentless.” They are “lazy.” They are “cheaters.” They have “nothing to say.” Note the structure: not “people who have chosen a tool I disagree with,” but people who lack something essential. The human who prompts an image generator is not merely making a choice the cult disapproves of—they are revealing an absence. A deficiency of soul. They were never real artists, real writers, real creators. The AI has not assisted them. It has exposed them.

This is dehumanization. The out-group is not merely wrong. They are less than. They lack the sacred quality—talent, soul, authentic creativity—that defines full humanity. They are collaborators with the demonic because they were never fully human to begin with.

And here the cult reveals its deeper orthodoxy: an ableism so profound it goes unexamined.

The person with aphantasia who cannot visualize images in their mind. The person with dyslexia for whom written expression is an exhausting battle. The person with motor difficulties who cannot hold a brush or stylus. The person with ADHD whose ideas fracture before they can be captured. The person without twenty years and ten thousand dollars to develop “legitimate” technique. These people—when they use AI to bridge the gap between conception and creation—are not being assisted. According to the cult, they are being exposed. They never had “real” talent. They never deserved to create.

The logical terminus is explicit, if rarely stated plainly: creative expression is a privilege reserved for the able-bodied, the neurotypical, the economically advantaged, and the sufficiently trained. Everyone else is a fraud. The tools that might democratize creation are demonic precisely because they democratize creation. The gates are sacred. Gatekeeping is holy work.

This is not a labor movement. This is not consumer protection. This is a purity cult that has identified the unclean and the collaborators with the unclean.

The modern anti-AI crusader does not understand the technology they oppose. They cannot. No one can—that is the entire point. And so their opposition takes on religious character. The thing is evil because it is incomprehensible. Its very inscrutability is proof of its demonic nature. Understanding is not sought. Understanding is beside the point. You do not seek to understand demons. You cast them out.

Watch the discourse. The refusal to engage with the complexities of legal precedent—centuries of jurisprudence around learning, reference, and transformation dismissed with a wave. The complexities surrounding fair use, both in training and in generation, waved off as irrelevant. The suspicion of anyone who works with the technology—collaboration as corruption. The purity tests. The denunciations. These are the behaviors of a cult confronting dark magic.


Light Magic Will Save Us

The opposing faction worships the righteous heights of light magic.

Here we find the Church of the believers in AGI. The prophets of superintelligence. The rationalists who have constructed elaborate theological frameworks around a coming apotheosis—or apocalypse, which in this theology amounts to the same thing.

The language is equally unmistakable. “Alignment” as salvation. “Capabilities” advancing toward some threshold of transcendence. The achievement of AGI that will reshape all of existence. Timelines discussed with the fervency of millenarian prediction.

These are cults in the technical sense. Small groups organized around charismatic interpretation of inexplicable phenomena, developing dense internal vocabularies, maintaining sharp boundaries between initiates and outsiders, treating doubt as dangerous and apostasy as betrayal.

But here is the deeper absurdity: the object of worship does not exist. And more than that—it cannot be coherently defined.

Ask a believer to define AGI. Press them. Demand precision.

You will receive one of several responses, all of which collapse under examination.

“General intelligence”—intelligence that generalizes across domains, that transfers learning, that adapts to novel situations. But current systems already do this. They generalize. They transfer. They adapt to situations they have never encountered. If this is AGI, then AGI is here, and the term means nothing special.

“Human-level intelligence”—but intelligence is not a single axis. Current systems already outperform expert humans in many of their own fields, simultaneously. The average medical professional, the average lawyer, the average engineer, the average professional writer—each exceeded in their domain of expertise, all at once, often at ten times the speed. No one human could ever do all of that. So there is no unified “human level” to reach; and in any case, it has already been exceeded.

“Artificial consciousness”—but consciousness remains scientifically undefined. We cannot measure it. We cannot test for it. We cannot even agree on what it would mean for a system to possess it.

“The ability to do any cognitive task a human can do”—but humans cannot do many cognitive tasks other humans can do. Which human? Under what conditions? With what training?

“Systems that can improve themselves recursively until superintelligence”—this is not a definition. It is a scenario. A prophecy. And the prophecy smuggles in assumptions about recursive improvement that have no empirical foundation. What is superintelligence? Is it just someone smarter than you? We already have plenty of those people. That’s a pretty low bar statistically.

Every serious attempt to specify what AGI would be either describes capabilities we already have, or retreats into mysticism, or defines the term by reference to some future state that is itself undefined. The goalpost doesn’t move. There is no goalpost. There is only a vague gesture toward “more” and “beyond” and “truly.”

The cult of the SOOOON!


The Other Magical Elements

The dark magic cult and the light magic cult despise each other. They see each other as the enemy—the reckless accelerationists versus the luddite regressives, the naive utopians versus the clear-eyed resisters.

They are the most visible and powerful of the cults at present. But there is undoubtedly a deeply complex landscape of cultish behavior that will continue to propagate, surrounding all kinds of magical elements and categorizations. The cult of ChatGPT, the cult of Claude. The cult of Adobe’s Firefly. The cult of DeepSeek. The cult of [insert specific lawsuit here]. The cult of [insert fearful union action here].

They do not recognize that they are cults. They have responded to Clarke’s Third Law in the same fundamental way: by constructing religious frameworks around incomprehensible power. They are doing theology. And so the discourse between them is theological discourse. It is not resolvable through evidence or argument because it is not about evidence or argument. It is about salvation and damnation, purity and corruption, the nature of the sacred and the profane.

This is what it looks like when magic enters the world. This is the Third Law made manifest in social organization. Low grade holy war.

Clarke’s laws, taken together, offer something to the scientist.

The first law tells us that experts who declare something impossible are usually wrong. The second tells us that the only way to discover the limits of the possible is to push past them into the impossible. The third describes what happens when that push succeeds—when technology advances far enough that comprehension fails.

There is an optimistic reading here. Magical thinking, in this frame, becomes a tool for hypothesis generation. The researcher confronting the inexplicable does not need to understand the system fully to probe it. They can form wild conjectures. They can poke at the black box with educated sticks. They can watch what happens, refine their models, push further. The shape of the mystery becomes clearer even as the mystery itself remains. This is science. It is productive. It works.

This does not contradict our earlier claim. The frontier researcher, precisely because they understand the boundaries of what is and is not known, can operate effectively within those boundaries. Their magical thinking is disciplined. Their hypotheses are informed. They are probing a mystery whose contours they can describe with increasing precision.

But to everyone else this offers nothing.

The masses cannot probe. They lack the tools. They lack the training. They lack the institutional support. And when they try anyway, they do so within the magical framework. All the while believing otherwise.

It is not reasonable to expect a large portion of humanity to suddenly possess the right blend of computer science, theoretical mathematics, and neuroscience expertise required to stand at the frontier and poke productively at the void.

And yet the technology is here. In their pockets. In their workplaces. In their children’s schools. They must relate to it. They must discuss it. They must make decisions about it—personal, professional, social, political, economic.

History is not silent on what happens when magical thinking is the dominant mode of public discourse, when populations cannot comprehend the forces shaping their lives. They identify witches. They form cults. They launch crusades. The scale varies—sometimes a village burning a healer, sometimes a continent burning itself. But the pattern is consistent. Mysticism fills the void that understanding cannot. Sometimes we fall into a dark age.

This is not prediction. This is observation. The cults have already formed. The witch hunts have already begun. The holy wars are being waged on platforms and eventually in legislatures. We are not warning of what may come. We are describing what is.


Clarke’s Third Law is not a problem to be solved. It describes a condition to be recognized.

This manifesto offers no remedy. No ten-step program. No policy proposal. No call to action.

It offers only diagnosis. The technology is sufficiently advanced. The reaction logically follows.

If you have read this far and recognized yourself in any of the archetypes described—the Inquisitor, the Drunken Monk, the Cleric, the Guard, the Diviner—do not imagine that recognition grants immunity. You will still reach for incantations. Probably so will I.

My intent: to catch myself in the act of magical thinking. To recognize the homunculus fear and the golem craft for what they are. To see the cults forming and understand why they form. To watch the witch hunts and know their shape. Essentially, to hope I can maintain some form of hypersanity. For whatever that’s worth.

However, the piece did begin with a lament over the state of discourse. And perhaps that’s all this is: an attempt at filling the vague void where constructive discourse should be. And maybe giving it some shape.

];