The Fundamental Fork in AI Futures
AI represents a singular technological transition, potentially the most transformative in human history. Leading proponents, represented by Kokotajlo's AI 2027 scenario, argue that superintelligence will arrive within years and radically reshape every dimension of human civilization.
Key Claims
- Smooth scaling toward human-level and beyond-human intelligence
- Recursive self-improvement enabling rapid capability leaps
- Economic and strategic dominance for whoever achieves it first
- Existential risk if development is not carefully managed
The opposing view holds that AI is powerful but fits historical patterns of transformative technology adoption. Narayanan and Kapoor's AI as Normal Technology argues that AI, like electricity or the internet, will be absorbed into society through gradual, institution-mediated diffusion.
Key Claims
- AI capabilities are impressive but patchy — reliable in narrow domains, fragile broadly
- Human oversight and institutional inertia will pace adoption
- Historical tech adoption patterns predict gradual integration
- Present risks (bias, inequality, misuse) matter more than speculative futures
Where Each Vision Breaks Down
| Vision | Strength | Blind Spot | Core Risk |
|---|---|---|---|
| Transformative | Takes exponential change seriously; prompts meaningful safety research; prevents complacency | May overestimate AI's autonomous agency and underestimate human institutional capacity to shape outcomes | Determinism & Despair |
| Normalization | Grounds discussion in evidence; highlights current harms; warns against speculative over-reaction | Innovation speed may compress diffusion timelines; competitive races may bypass safety | False Sense of Security |
| Both | Capture real aspects of AI's trajectory | Neither accounts for the co-evolutionary dynamics between AI and human institutions | Incomplete Model |
Beyond the Extremes: A Third Path
The paper proposes five principles for a nuanced framework that captures AI's multi-faceted reality without falling into techno-utopianism or techno-minimalism.
Instead of viewing AI as an external force or mere continuity, treat it as co-evolving with society. Outcomes are not pre-determined by technology's internal logic, nor solely by human intentions, but by the interaction of the two through feedback loops.
AI could have impacts as momentous as the transformative camp suggests — but likely over a longer timeframe, through transitional phases rather than overnight singularity. The Industrial Revolution was revolutionary, but its effects unfolded across generations.
We cannot perfectly predict whether AI progress will be fast or slow, benign or malignant, but we can invest in readiness for a range of scenarios: building flexible governance mechanisms, fostering public understanding, and encouraging innovation with built-in safeguards.
Replace ideological debate with empirical tracking. The framework introduces three conceptual metrics — Technological Saturation Rate, Displacement Velocity, and Governance Adaptability Index — as starting points for grounded monitoring.
Hannah Arendt cautioned against losing distinctly human deliberation; Foucault warned of surveillance and self-discipline; Byung-Chul Han raises the concern of "expelling the Other" through AI personalization. The nuanced framework integrates these humanistic perspectives alongside technical and economic analysis.
Three Metrics to Watch
Rather than endlessly debating "Will AI end the world or not?", the framework proposes watching empirical indicators that tell us which trajectory we're actually on.
Technological Saturation Rate (TSR): how quickly a new AI technology permeates society, measured as the percentage of businesses or households using a particular AI application over time, compared with historical technology adoptions. A high TSR would suggest a more transformative moment; a low TSR indicates slower absorption, giving society more time to adjust.
Current signal: Rapid uptake in some areas (AI coding assistants) but bottlenecks persist (many enterprises still in pilot stages)
Displacement Velocity (DV): the speed at which AI automates human jobs or tasks, measured in jobs per year or the percentage of the workforce affected. Historically, economies can reabsorb roughly 5% job turnover per year without major disruption; a DV significantly above that would signal coming social stress.
Current signal: ~14% of workers report some AI-driven displacement; 1 in 4 CEOs expects at least 5% job cuts short-term
Governance Adaptability Index (GAI): the speed and effectiveness with which governance catches up to and guides a new AI capability, measured by the time lag between key AI milestones and the corresponding regulatory or standardization actions. A high GAI means governance keeps pace; a low GAI means technology runs far ahead of oversight.
Current signal: 25 AI-related regulations enacted in 2023, versus nearly none in mid-2010s — improving, but gaps remain
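The three indicators above reduce to simple arithmetic, and a minimal sketch can make the definitions concrete. Everything here is illustrative: the function names are invented for this sketch, the 5% reabsorption threshold comes from the text, and the sample adoption shares, displacement rate, and dates are hypothetical, not measured data.

```python
from datetime import date

# Technological Saturation Rate: change in adoption share per year.
def saturation_rate(share_start: float, share_end: float, years: float) -> float:
    """Percentage-point change in adoption per year (as a fraction)."""
    return (share_end - share_start) / years

# Displacement Velocity check: compare the share of the workforce displaced
# per year against the ~5%/year turnover economies have historically reabsorbed.
REABSORPTION_THRESHOLD = 0.05  # rough historical figure cited in the text

def displacement_stress(displaced_share_per_year: float) -> bool:
    """True if displacement outpaces historical reabsorption capacity."""
    return displaced_share_per_year > REABSORPTION_THRESHOLD

# Governance Adaptability Index: here modeled as the inverse of the lag
# (in years) between an AI milestone and the first regulatory response.
def governance_adaptability(milestone: date, regulation: date) -> float:
    lag_years = (regulation - milestone).days / 365.25
    return 1.0 / max(lag_years, 0.1)  # floor the lag so tiny lags don't blow up

# Hypothetical sample values, for illustration only.
tsr = saturation_rate(0.05, 0.35, 2.0)   # adoption grew from 5% to 35% in 2 years
stressed = displacement_stress(0.03)     # 3% of workforce displaced per year
gai = governance_adaptability(date(2022, 11, 30), date(2024, 8, 1))
```

The point of the sketch is that each metric is observable: given adoption surveys, labor statistics, and a dated log of regulatory actions, the trajectory question becomes an empirical one rather than a matter of ideology.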
A High-Speed Chase Between Technology and Adaptation
The future with AI likely won't resemble the most extreme visions — neither a paradise of omnipotent benevolent machines nor a wasteland of human obsolescence. It will be a human future, with all the messiness, creativity, conflict, and progress that implies.
In the best case, AI delivers productivity gains, medical breakthroughs, and personalized services without upending the core of human values and agency: a new renaissance powered by human-AI collaboration.
In the worst case, social fractures, concentration of power, or misuse of powerful AI tools such as autonomous weapons and mass propaganda destabilize society, even absent rogue-AI scenarios.
What will determine the outcome is not the technology itself, but governance quality, democratic accountability, the distribution of power, and our collective will to ensure AI democratizes rather than concentrates its benefits.