ChatGPT · 2025
Critical Analysis · Artificial Intelligence

AGI as Myth and the Limits of "Intelligence" in AI

A synthesis of expert voices arguing that Artificial General Intelligence is less a scientific goal than a quasi-religious construct, and why this misdiagnosis matters.


How AGI Became a Quasi-Religion

Across academia, technology, and philosophy, a consistent theme emerges: "Artificial General Intelligence" is more myth than reality — a buzzword imbued with near-religious significance, invoked to promise endless abundance while sidestepping accountability.

"The new religious idea of AI parallels traditional religion: people are asked to serve a hypothetical super-AI as if it were a deity, while a priesthood of tech elites benefits in the here and now."

— Jaron Lanier
The Vague Signifier Problem: Researchers Alex Hanna and Emily M. Bender describe AGI as a term no one can agree on — neither its meaning nor how to create it. It serves primarily to evoke awe and attract investment, much as "AI" once did before becoming an overused marketing term.
Myth Function

Justifies massive funding for tech moguls' interests while distracting from present-day ethical and social responsibilities

Quasi-Utopian Quest

Framed as humanity's paramount project — a magical solution to social problems, ushering in a "new age of abundance"

Historical Determinism

Promotes the belief that computers will inevitably outsmart humans — a determinism Lanier traces back to the dawn of the computing age

Critics of the AGI Narrative

Hanna & Bender · Tech Policy Press, 2025

The Myth of AGI

AGI is a "vague signifier" invoked to promise endless abundance for humankind while sidestepping accountability. Tech CEOs and futurists treat it as an inevitable, messianic goal — even though no one agrees on what it means or how to create it.

Accountability Gap · Marketing Myth
Jaron Lanier · Edge.org

The Myth of AI

The obsession with AGI is a "dangerous distraction." The belief that computers will inevitably take over is a modern mythology with its own reactionary prophets of doom. Lanier urges us to focus on "real problems" and current AI capabilities rather than mythical superintelligence.

Historical Pattern · Focus on Real Problems
Noam Chomsky · NYT, 2023

The False Promise of ChatGPT

Current AI systems are "lumbering statistical engines" that differ profoundly from human reasoning, encoding "ineradicable defects." True intelligence requires not just describing facts, but grasping what is not the case — formulating explanations and imagining alternatives. AI falls short of this mark.

Counterfactual Reasoning · Structural Limits
Gary Marcus · AI Critic

AGI Will Not Happen in Your Lifetime

We are fundamentally missing what it takes to replicate human common sense and understanding. Current approaches, however scaled, will not produce anything resembling general human intelligence.

Missing Foundations · Common Sense Gap

LLM "Intelligence" vs. Human Intelligence

A recurring theme among researchers is that what AI systems do is fundamentally different from human thought — despite superficial appearances.

Dimension     | Large Language Models                                                    | Human Intelligence
Mechanism     | Probabilistic next-token prediction; pattern matching                    | Embodied cognition grounded in lived experience
Understanding | "Pattern matching at an extraordinarily sophisticated level" (Goertzel)  | Conceptual, causal, and counterfactual understanding
Explanation   | Produces plausible-sounding text about causes                            | Can formulate true explanations and imagine alternatives
Grounding     | Statistical correlations across tokens; no referential grounding         | Grounded in sensorimotor experience; concepts tied to reality
Creativity    | "Utterly lacks the creative and inventive spark" (Goertzel)              | Genuine novelty; creative leaps from sparse data
Knowledge     | Manipulates symbols without understanding their meaning                  | Knowledge grounded in context, consequence, and purpose
Error pattern | Confident hallucinations; fails on simple novel tasks                    | Systematic but context-sensitive; recovers gracefully
"Looking for intelligence in an LLM is like walking around the back of a phonograph player to find the musicians."
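To make the "probabilistic next-token prediction" mechanism in the table concrete, here is a deliberately toy sketch: a bigram model that estimates the distribution over the next word from raw counts in a tiny corpus. This is an illustrative assumption-laden miniature, not how any production LLM works (real models use learned neural distributions over huge vocabularies), but the interface is the same: given context, produce a probability distribution over what comes next.

```python
# Toy bigram "language model": next-token prediction reduced to counting.
# Everything here (the corpus, the function names) is illustrative only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each preceding token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_distribution(prev):
    """Return P(next | prev) as a dict, estimated from raw counts."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

# After "the", the model assigns probability by frequency alone:
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(next_token_distribution("the"))
```

The point of the sketch is the critics' point: nothing in this pipeline refers to cats or mats; it manipulates token statistics, and scaling the same recipe changes the fluency, not the kind of thing being done.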

AGI as an Unattainable Goal

Several experts argue that AGI may be structurally impossible — not merely distant — comparing its pursuit to the quest for perpetual motion.

"The Illusion of AGI" (Research Paper, 2023): "The vision of AGI is powerful, but like the perpetual motion machine, it is ultimately a myth." The authors identify multiple structural reasons why general human-level AI is unattainable.
Theoretical Limits
  • Gödel's incompleteness theorem: formal systems have unprovable truths
  • "Bucket" memory limits of current architectures
  • Intelligence is not an infinite free lunch from data alone
Mueller's Trilogy of Fallacies
  • Believing a machine could possess a monolithic "general" intellect
  • Anthropomorphizing AI with human-like goals or survival instincts
  • Assuming superhuman computation translates to unlimited real-world power
Perpetual motion: centuries of pursuit; now known to be physically impossible.
Human-level AGI: decades of pursuit; structurally impossible?
Lesson learned: the pursuit of perpetual motion taught us thermodynamics; the pursuit of AGI should teach us to build useful tools instead.

A Different Future: Augmented and "Alien" Intelligence

Abandoning the AGI myth does not mean abandoning progress in AI — it reframes what progress means. The end of the AGI illusion could mark the beginning of a more productive path.

Collaborative Intelligence · Jiajie Zhang, PhD

Advanced AI systems are best seen as "cognitive artifacts" integrated with human oversight, forming a "distributed intelligence" system. Far from creating an independent machine overlord, this symbiosis amplifies human capability. Together, human + AI can achieve far more than either alone.

AI supremacy is a myth; collaborative intelligence is the reality.

Specialized Systems · Grady Booch

The "one big brain" idea of AGI is misguided. If super-intelligence comes, it won't be a single, human-like entity, but an array of specialized systems and human-machine networks working in concert. Narrow, "rigid" AI applications — far from trivial — can be incredibly powerful when combined with human expertise.

They succeed precisely because they are narrow.

The Paradox of Power: Proliferating expert systems and AI assistants, while not general, will dramatically boost productivity and solve complex problems alongside humans. This could be more disruptive than any single AGI, because it pervades every industry and aspect of life.
"The end of the AGI illusion is not the end of progress — it can usher in a pragmatic and productive future for AI."

Demystifying AI for a Useful Future

A clear theme emerges across AI scientists, technologists, and philosophers: AGI in the popular sense — a human-like, all-purpose artificial mind — is widely regarded as a myth or a gravely misunderstood idea.

The "I" in AI

The intelligence in today's AI is more an imitation of the real thing than the thing itself. Many doubt that scaling current techniques will ever bridge the gap to true human-like cognition.

Not Pessimism

These critiques redirect attention to alternative visions of what advanced AI can be — powerful, specialized, and aligned with human values.

The Real Challenge

Not to surrender to a fantasy of machine superminds, but to shape how these technologies actually interact with human society — useful, safe, and aligned.

In refusing to worship the myth of AGI, we might free ourselves to pursue more tangible innovations that augment human intelligence and address real-world problems — long before any "general" intelligence ever arrives.