AI Futures Project  ·  2025
Futures Analysis · Governance · Societal Impact

Visions in AI: Mapping the Futures We Face

Two dominant narratives about AI's future — transformative superintelligence versus normal technology — and why reality will likely contain elements of both while aligning perfectly with neither.

Framework: Dual-Vision + Nuanced Synthesis · Sources: Stanford AI Index · WEF · Narayanan & Kapoor

The Fundamental Fork in AI Futures

[Figure: the fundamental fork in AI futures, showing the Transformative Vision and the Normalization Vision as opposing branches, with the Nuanced Framework between them where reality likely lies]
Transformative Vision

AI represents a singular technological transition, potentially the most transformative in human history. Leading proponents (represented by Kokotajlo's AI 2027 scenario) argue that superintelligence, arriving within years, will radically reshape every dimension of human civilization.

Key Claims

  • Smooth scaling toward human-level and beyond-human intelligence
  • Recursive self-improvement enabling rapid capability leaps
  • Economic and strategic dominance for whoever achieves it first
  • Existential risk if development is not carefully managed
Keywords: AGI by 2027–2030 · Existential risk · Paradigm break
Normalization Vision

AI is powerful but fits historical patterns of transformative technology adoption. Narayanan and Kapoor's AI as Normal Technology argues that AI, like electricity or the internet, will be absorbed into society through gradual, institution-mediated diffusion.

Key Claims

  • AI capabilities are impressive but patchy — reliable in narrow domains, fragile broadly
  • Human oversight and institutional inertia will pace adoption
  • Historical tech adoption patterns predict gradual integration
  • Present risks (bias, inequality, misuse) matter more than speculative futures
Keywords: Gradual integration · Institutional mediation · Manageable
78% of Organizations Using AI (2024, Stanford AI Index) — but only about half have integrated AI into core operations. This mixed picture supports neither vision perfectly: rapid adoption in some sectors, slow diffusion in others, significant disparities globally.

Where Each Vision Breaks Down

| Vision | Strength | Blind Spot | Core Risk |
| --- | --- | --- | --- |
| Transformative | Takes exponential change seriously; prompts meaningful safety research; prevents complacency | May overestimate AI's autonomous agency and underestimate human institutional capacity to shape outcomes | Determinism & Despair |
| Normalization | Grounds discussion in evidence; highlights current harms; warns against speculative over-reaction | Innovation speed may compress diffusion timelines; competitive races may bypass safety | False Sense of Security |
| Both | Capture real aspects of AI's trajectory | Neither accounts for the co-evolutionary dynamics between AI and human institutions | Incomplete Model |

Beyond the Extremes: A Third Path

The paper proposes five principles for a nuanced framework that captures AI's multi-faceted reality without falling into techno-utopianism or techno-minimalism.

01 · AI as Socio-Technical Co-Evolution

Instead of viewing AI as an external force or mere continuity, treat it as co-evolving with society. Outcomes are not pre-determined by technology's internal logic, nor solely by human intentions, but by the interaction of the two through feedback loops.

02 · Transformative Potential AND Gradual Integration

AI could have impacts as momentous as the transformative camp suggests — but likely over a longer timeframe, through transitional phases rather than an overnight singularity. The Industrial Revolution was revolutionary, yet its effects unfolded across generations.

03 · Emphasis on Adaptability and Resilience

We cannot perfectly predict whether AI progress will be fast or slow, benign or malignant. We can invest in being ready for various scenarios — building flexible governance mechanisms, fostering public understanding, encouraging innovation with built-in safeguards.

04 · Continuous Monitoring with Empirical Metrics

Replace ideological debate with empirical tracking. The framework introduces three conceptual metrics — Technological Saturation Rate, Displacement Velocity, and Governance Adaptability Index — as starting points for grounded monitoring.

05 · Bridging Cultural and Philosophical Divides

The framework integrates humanistic perspectives alongside technical and economic analysis: Hannah Arendt's caution about losing distinctly human deliberation, Foucault's warning about surveillance and self-discipline, and Byung-Chul Han's concern about "expelling the Other" through AI personalization.

Three Metrics to Watch

Rather than endlessly debating "Will AI end the world or not?", the framework proposes watching empirical indicators that tell us which trajectory we're actually on.

Technological Saturation Rate (TSR)

How quickly a new AI technology permeates society: the percentage of businesses or households using a particular AI application over time, compared with historical tech adoptions. A high TSR might indicate a more transformative moment; a low TSR indicates slower absorption, giving society more time to adjust.

Current signal: Rapid uptake in some areas (AI coding assistants) but bottlenecks persist (many enterprises still in pilot stages)

Displacement Velocity (DV)

Speed at which AI automates human jobs or tasks — measured in jobs per year or percentage of workforce affected. Historically, economies can reabsorb ~5% job turnover per year without major disruption. A DV significantly above that could signal coming social stress.

Current signal: ~14% of workers report some AI-driven displacement; 1 in 4 CEOs expects at least 5% job cuts in the short term

Governance Adaptability Index (GAI)

The speed and effectiveness with which governance can catch up to and guide a new AI capability — measured by the time lag between key AI milestones and corresponding regulatory/standardization actions. A high GAI means governance keeps pace; a low GAI means technology runs far ahead of oversight.

Current signal: 25 AI-related regulations were enacted in 2023, versus nearly none in the mid-2010s; governance is improving, but gaps remain
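Since the GAI is defined as a time lag, it can be sketched numerically. Below is a minimal Python illustration; the milestone names and dates are entirely hypothetical placeholders, not data from the paper or any real regulatory record:

```python
from datetime import date
from statistics import median

def governance_lag_days(milestones: dict, responses: dict) -> float:
    """Median lag in days between an AI milestone and its matching
    regulatory/standardization response. A shorter lag implies a
    higher Governance Adaptability Index (GAI)."""
    lags = [(responses[name] - milestones[name]).days
            for name in milestones if name in responses]
    return median(lags)

# Hypothetical example dates, for illustration only.
milestones = {"capability_a": date(2020, 6, 1),
              "capability_b": date(2022, 11, 30)}
responses  = {"capability_a": date(2023, 1, 15),
              "capability_b": date(2024, 8, 1)}

print(governance_lag_days(milestones, responses))  # → 784.0 (roughly 2.1 years)
```

Tracking this median lag over successive milestones would show whether governance is catching up (lag shrinking) or falling behind (lag growing).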

Interpreting the Metrics: If TSR and DV both remain low to moderate in the next few years, that leans toward the normalization side — lots of time to adapt. If they start spiking upward, that lends credence to transformative impact and signals urgency to act. These metrics don't tell us whether AI is dangerous — they tell us how fast we need to respond.
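The interpretation rule above can be sketched as a small decision function. All thresholds except the ~5% reabsorption rate (which echoes the DV discussion) are assumptions chosen for illustration; the paper defines these metrics conceptually, without fixed cutoffs:

```python
# ~5% annual job turnover that economies have historically reabsorbed (per the
# DV discussion); the other cutoffs below are illustrative assumptions.
REABSORPTION_RATE = 0.05

def read_trajectory(tsr_annual: float, dv_annual: float) -> str:
    """Classify which trajectory the metrics currently suggest.

    tsr_annual: year-over-year growth in the share of organizations adopting AI
    dv_annual:  share of the workforce displaced by AI per year
    """
    if dv_annual > REABSORPTION_RATE or tsr_annual > 0.20:
        return "leaning transformative: act with urgency"
    if dv_annual < 0.02 and tsr_annual < 0.10:
        return "leaning normalization: time to adapt"
    return "mixed signals: keep monitoring"

print(read_trajectory(tsr_annual=0.12, dv_annual=0.03))
# → mixed signals: keep monitoring
```

The point of the sketch is the shape of the logic, not the numbers: low-to-moderate readings lean normalization, spikes lean transformative, and anything in between calls for continued monitoring.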

A High-Speed Chase Between Technology and Adaptation

"We are essentially in a high-speed chase between technological change and societal adaptation. It will take vigilance and wisdom to ensure adaptation wins the race."

The future with AI likely won't resemble the most extreme visions — neither a paradise of omnipotent benevolent machines nor a wasteland of human obsolescence. It will be a human future, with all the messiness, creativity, conflict, and progress that implies.

If Adaptation Wins

Productivity gains, medical breakthroughs, personalized services — without upending the core of human values and agency. A new renaissance powered by human-AI collaboration.

If Adaptation Falls Behind

Social fractures, concentration of power, or misuse of powerful AI tools (autonomous weapons, mass propaganda) could destabilize society even absent rogue-AI scenarios.

What Determines It

Not technology, but governance quality, democratic accountability, power distribution, and our collective will to ensure AI democratizes rather than concentrates its benefits.

"The visions in AI we choose to prioritize will shape the strategies we adopt. By synthesizing the transformative and the normalizing into a responsible, evidence-informed vision, we stand the best chance of harnessing AI for a flourishing and free society — while keeping intact the humanity that defines our past, present, and future."