How AGI Became a Quasi-Religion
Across academia, technology, and philosophy, a consistent theme emerges: "Artificial General Intelligence" is more myth than reality — a buzzword imbued with near-religious significance, invoked to promise endless abundance while sidestepping accountability.
— Jaron Lanier
- Justifies massive funding for tech moguls' interests while distracting from present-day ethical and social responsibilities
- Framed as humanity's paramount project — a magical solution to social problems, ushering in a "new age of abundance"
- Promotes the belief that computers will inevitably outsmart humans — a determinism Lanier traces back to the dawn of the computing age
Critics of the AGI Narrative
The Myth of AGI
AGI is a "vague signifier" invoked to promise endless abundance for humankind while sidestepping accountability. Tech CEOs and futurists treat it as an inevitable, messianic goal — even though no one agrees on what it means or how to create it.
The Myth of AI
The obsession with AGI is a "dangerous distraction." The belief that computers will inevitably take over is a modern mythology with its own reactionary prophets of doom. Lanier urges us to focus on "real problems" and current AI capabilities rather than mythical superintelligence.
The False Promise of ChatGPT
Current AI systems, Chomsky and his co-authors argue, are "lumbering statistical engines" that differ profoundly from human reasoning and encode "ineradicable defects." True intelligence requires not just describing facts, but grasping what is not the case — formulating explanations and imagining alternatives. AI falls short of this mark.
AGI Will Not Happen in Your Lifetime
We are fundamentally missing what it takes to replicate human common sense and understanding. Current approaches, however scaled, will not produce anything resembling general human intelligence.
LLM "Intelligence" vs. Human Intelligence
A recurring theme among researchers is that what AI systems do is fundamentally different from human thought — despite superficial appearances.
| Dimension | Large Language Models | Human Intelligence |
|---|---|---|
| Mechanism | Probabilistic next-token prediction; pattern matching | Embodied cognition grounded in lived experience |
| Understanding | "Pattern matching at an extraordinarily sophisticated level" (Goertzel) | Conceptual, causal, and counterfactual understanding |
| Explanation | Produces plausible-sounding text about causes | Can formulate true explanations and imagine alternatives |
| Grounding | Statistical correlations across tokens; no referential grounding | Grounded in sensorimotor experience; concepts tied to reality |
| Creativity | "Utterly lacks the creative and inventive spark" (Goertzel) | Genuine novelty; creative leaps from sparse data |
| Knowledge | Manipulates symbols without understanding their meaning | Knowledge grounded in context, consequence, and purpose |
| Error pattern | Confident hallucinations; fails on simple novel tasks | Systematic but context-sensitive; recovers gracefully |
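The "probabilistic next-token prediction" in the table's first row can be made concrete with a toy sketch — a hypothetical bigram model, far simpler than any real LLM, but the same in spirit: the model only records which token tends to follow which, with no grasp of what any token refers to.

```python
import random
from collections import defaultdict, Counter

# Toy bigram "language model": count which token follows which.
# Real LLMs use deep networks over long contexts, but the training
# objective is analogous: predict the next token from statistics alone.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev, rng=random.Random(0)):
    """Sample the next token in proportion to observed frequency."""
    tokens, weights = zip(*follows[prev].items())
    return rng.choices(tokens, weights=weights)[0]

# After "the", the model predicts "cat" twice as often as "mat":
# it has matched a pattern, but it knows nothing about cats or mats.
print(follows["the"])  # Counter({'cat': 2, 'mat': 1})
```

The point of the sketch is the table's contrast: the predictor's output can look fluent while its internal state is nothing but co-occurrence counts — no concepts, no causes, no counterfactuals.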
AGI as an Unattainable Goal
Several experts argue that AGI may be structurally impossible — not merely distant — comparing its pursuit to the quest for perpetual motion.
Arguments that AGI is structurally impossible:
- Gödel's incompleteness theorems: sufficiently expressive formal systems contain truths they cannot prove
- "Bucket" memory limits of current architectures
- Intelligence is not an infinite free lunch from data alone

Fallacies underlying the AGI narrative:
- Believing a machine could possess a monolithic "general" intellect
- Anthropomorphizing AI with human-like goals or survival instincts
- Assuming superhuman computation translates to unlimited real-world power
A Different Future: Augmented and "Alien" Intelligence
Abandoning the AGI myth does not mean abandoning progress in AI — it reframes what progress means. The end of the AGI illusion could mark the beginning of a more productive path.
Advanced AI systems are best seen as "cognitive artifacts" integrated with human oversight, forming a "distributed intelligence" system. Far from creating an independent machine overlord, this symbiosis amplifies human capability. Together, human + AI can achieve far more than either alone.
AI supremacy is a myth; collaborative intelligence is the reality.
The "one big brain" idea of AGI is misguided. If super-intelligence comes, it won't be a single, human-like entity, but an array of specialized systems and human-machine networks working in concert. Narrow, "rigid" AI applications — far from trivial — can be incredibly powerful when combined with human expertise.
They succeed precisely because they are narrow.
Demystifying AI for a Useful Future
A clear theme emerges across AI scientists, technologists, and philosophers: AGI in the popular sense — a human-like, all-purpose artificial mind — is widely regarded as a myth or a gravely misunderstood idea.
What today's systems display is more an imitation of intelligence than the real thing, and many doubt that scaling current techniques will ever bridge the gap to true human-like cognition.
These critiques redirect attention to alternative visions of what advanced AI can be — powerful, specialized, and aligned with human values.
The goal is not to surrender to a fantasy of machine superminds, but to shape how these technologies actually interact with human society: useful, safe, and aligned.
In refusing to worship the myth of AGI, we might free ourselves to pursue more tangible innovations that augment human intelligence and address real-world problems — long before any "general" intelligence ever arrives.