AI as Normal Technology

Author/Source: Arvind Narayanan & Sayash Kapoor, Knight First Amendment Institute at Columbia University

Key Ideas:
- AI should be understood as a "normal" general-purpose technology akin to electricity or the internet, not as an autonomous superintelligent entity
- There is a critical distinction between AI methods (invention), AI applications (innovation), and AI adoption (diffusion), each operating on different timescales
- Diffusion of AI in safety-critical and consequential domains lags decades behind technical capabilities due to institutional, organizational, and regulatory speed limits
- Benchmarks and exam performance have poor "construct validity" for predicting real-world economic impact; they measure method progress, not application usefulness
- Human power comes from tool use, not raw intelligence; "superintelligence" is an incoherent concept when properly unpacked into capability versus power
- Control of AI comes in many flavors beyond alignment or human-in-the-loop oversight: auditing, monitoring, fail-safes, circuit breakers, least privilege, and formal verification
- Policy should prioritize resilience and reducing uncertainty over nonproliferation, which creates dangerous single points of failure and concentration of power

Summary: Narayanan and Kapoor present a comprehensive worldview for understanding AI as "normal technology" -- not to minimize its impact, but to ground analysis in historical patterns of technology adoption and diffusion. They argue that transformative economic effects will unfold over decades, not years, because the gap between AI methods (what models can do on benchmarks) and real-world applications (what actually gets deployed) is mediated by slow-moving organizational change, safety requirements, and institutional adaptation. They draw extensively on the history of electrification, which took roughly 40 years to deliver productivity gains because factories had to be physically redesigned around the distributed logic of electric motors rather than a central steam shaft.

The paper reframes the "superintelligence" debate by distinguishing intelligence from power. Humans are already "superintelligent" relative to their pre-technological ancestors, not because of biological differences but because of accumulated tools and knowledge. AI is another such tool. The authors predict that AI acting alone will not meaningfully outperform humans working with AI in areas like forecasting and persuasion, and that games provide misleading intuitions about real-world AI capability because speed advantages rarely matter outside constrained domains.

On risks, they argue that catastrophic misalignment is a "speculative risk" (epistemic uncertainty about whether the true risk is zero) rather than a confirmed threat. They advocate for downstream defenses against misuse -- strengthening email filtering, cybersecurity infrastructure, and biosecurity procurement screening -- rather than trying to make models themselves incapable of misuse. For policy, they recommend resilience over nonproliferation, arguing that concentrating AI power in a few hands creates brittleness and single points of failure, while widespread availability of AI strengthens defenses.

The paper concludes that AI's real systemic risks -- inequality, labor displacement, erosion of trust, democratic backsliding -- are continuations of problems created by capitalism and previous technologies, amplified by AI. These "normal" risks deserve more attention than speculative existential scenarios.

Relevance to Economics Research: This paper is directly relevant to economics researchers thinking about AI's macroeconomic impact, labor market effects, and the pace of technology adoption. Its framework -- distinguishing methods, applications, and diffusion -- provides a useful analytical lens for empirical research on AI productivity effects. The construct validity critique of benchmarks matters for anyone trying to forecast AI's economic impact from capability measurements. The diffusion theory perspective (drawing on Paul David's work on electrification) offers testable predictions about how long AI's productivity effects will take to materialize across sectors.

Related Concepts:
- concepts/ai-adoption-academia
- concepts/jagged-frontier
- concepts/agentic-ai
- concepts/human-ai-collaboration

Related Summaries:
- summaries/bitter-lesson
- summaries/shape-of-ai
- summaries/train-left-station
- summaries/something-big-happening