The Shape of AI: Jaggedness, Bottlenecks and Salients

Author/Source: Ethan Mollick, One Useful Thing (Substack), December 2025

Key Ideas:
- AI capabilities form a "jagged frontier" -- superhuman at some tasks, embarrassingly bad at others, in ways that defy human intuition about task difficulty
- Jaggedness creates bottlenecks: even very smart AI cannot automate entire workflows if it fails at a single critical sub-task
- Bottlenecks are not only about AI ability; institutional processes (FDA approval, clinical trials, human review) impose their own speed limits
- "Reverse salients" (from historian Thomas Hughes) describe the single technical or social problem holding back an entire system from leaping forward
- When a reverse salient is resolved, entire categories of capability can suddenly unlock -- as happened with Google's Nano Banana Pro image generation enabling high-quality slide decks
- Memory and real-time learning remain key weak spots with little improvement, potentially preventing full human-task overlap
- We should "watch the bottlenecks, not the benchmarks" to understand where AI is headed

Summary: Mollick extends his earlier "jagged frontier" concept (from the 2023 Harvard/Wharton study on AI and consulting) to explain why AI progress feels both astonishing and disappointing simultaneously. AI can be superhuman at differential medical diagnosis and competition-level mathematics while failing at simple visual puzzles or running a vending machine. This jaggedness means that even as the AI capability frontier expands rapidly, it may never fully overlap with the set of human tasks -- leaving persistent complementarities between humans and AI.

The key analytical contribution is connecting jaggedness to bottlenecks and reverse salients. A system is only as functional as its weakest component. Even when AI handles 99% of a workflow brilliantly, a 1% failure (like not being able to email authors for unpublished data in a systematic review) prevents full automation. But bottlenecks also create the illusion of permanent limitation: when AI labs identify and fix a reverse salient, the entire system jumps forward. Mollick illustrates this with Google's Nano Banana Pro, whose image generation quality suddenly unlocked the ability to create high-quality presentations as images rather than code -- a capability that had been bottlenecked by poor image generation for years. The pattern: "don't watch the benchmarks, watch the bottlenecks. When one breaks, everything behind it comes flooding through."
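The bottleneck logic above is essentially multiplicative: if a workflow is a chain of sub-tasks and automation fails whenever any one sub-task fails, end-to-end reliability is the product of the per-task reliabilities. A minimal sketch (the numbers are illustrative, not from Mollick's post, and the independence assumption is a simplification):

```python
from math import prod

def workflow_success(task_reliabilities):
    """End-to-end success probability for a chain of sequential
    sub-tasks with independent failures: one weak link caps the
    whole workflow."""
    return prod(task_reliabilities)

# 99 sub-tasks the AI handles almost perfectly...
strong = [0.999] * 99
# ...plus one bottleneck it cannot do at all (e.g. emailing
# authors for unpublished data in a systematic review).
print(workflow_success(strong))          # ~0.906: the chain is viable
print(workflow_success(strong + [0.0]))  # 0.0: one failure blocks full automation
```

The sketch also shows the flip side of the reverse-salient argument: raising that single 0.0 term is worth more than marginal gains on the 99 tasks already near ceiling.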

Relevance to Economics Research: The jagged frontier framework is essential for economists studying AI's labor market effects and productivity impacts. It explains why aggregate measures of AI capability are poor predictors of job displacement -- what matters is whether AI can handle all the sub-tasks in a particular job, not just most of them. The bottleneck concept maps directly onto task-based models of labor markets (Acemoglu and Autor). For researchers using AI in their own workflow, understanding jaggedness helps set realistic expectations: AI may handle data analysis and literature synthesis superbly while failing at tasks requiring institutional knowledge or novel data collection.
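The task-based point can be made concrete with a toy rule in the spirit of Acemoglu-Autor models (the sub-task names and scores below are hypothetical, chosen only to illustrate the "all sub-tasks, not most" condition):

```python
def job_automatable(ai_scores, human_scores):
    """Toy task-based criterion: a job is fully automatable only if
    AI matches or beats the human benchmark on EVERY sub-task,
    not merely on average."""
    return all(a >= h for a, h in zip(ai_scores, human_scores))

# Hypothetical scores for a research-assistant job:
#   [data analysis, literature synthesis, novel data collection]
ai    = [0.95, 0.90, 0.20]
human = [0.70, 0.75, 0.60]
print(job_automatable(ai, human))  # False: one lagging sub-task blocks displacement
```

This is why aggregate capability benchmarks (averages over tasks) are poor predictors of displacement: the binding constraint is the minimum, not the mean.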

Related Concepts:
- concepts/jagged-frontier
- concepts/human-ai-collaboration
- concepts/ai-adoption-academia
- concepts/agentic-ai

Related Summaries:
- summaries/ai-normal-technology
- summaries/academics-wake-up
- summaries/train-left-station
- summaries/what-ai-got-wrong