Claude Code 21: Attention, Human Verification and Congestion

  • Author/Source: Scott Cunningham (Baylor University), via Substack ("Causal Inference")
  • Original: https://causalinf.substack.com/p/claude-code-21-attention-human-verification

  • Key Ideas

  • Productivity gains from Claude Code create "stock pollutants" — excess outputs, disorganized files, duplicated work — that accumulate and generate rising marginal costs
  • The costs of AI-assisted productivity may be convex (superlinear), growing faster than the gains from additional output
  • Three isoquant scenarios for human time use: (1) maintain same time on research with AI augmentation (best), (2) reduce time but gain knowledge (acceptable), (3) extreme automation where the researcher becomes a "button pusher" (worst)
  • If human time and machine time are near-perfect substitutes for cognitive tasks, rational actors will drift toward less human time, reducing attention and learning
  • Karpathy's insight: the new skill is human verification, not prompting or "vibe coding"
  • Three critical skills going forward: human verification (driving errors to zero), high attention (resisting the automation of judgment), and congestion management (handling the byproducts of faster work)
  • The "rhetoric of decks" workflow creates its own congestion: too many slides, hard-coded outputs, duplicate copies, out-of-order progress

  • Summary

Cunningham examines the dark side of dramatically increased productivity with Claude Code. While working at 5-10x speed on a revise-and-resubmit, he found that the externalities of faster work — disorganized files, duplicated outputs, hard-coded values in decks, branching code pipelines — were accumulating as "stock pollutants" that made it increasingly difficult to locate and verify results. He frames this through production theory: if productivity gains face diminishing returns while the costs of managing byproducts are convex, the net benefit of additional AI-assisted work may plateau or even decline.
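The plateau-or-decline claim can be made concrete with a toy calculation. Everything in the snippet (square-root gains, quadratic costs, the coefficients 4.0 and 0.15) is an illustrative assumption chosen to exhibit the shape of the argument, not a model taken from the essay:

```python
import numpy as np

# Illustrative sketch, not Cunningham's actual model: concave productivity
# gains minus convex congestion ("stock pollutant") costs. The functional
# forms and parameters are assumptions used only to exhibit the
# plateau-then-decline shape the essay describes.
q = np.linspace(0.0, 10.0, 1001)   # volume of AI-assisted output
gains = 4.0 * np.sqrt(q)           # diminishing returns to extra output
costs = 0.15 * q**2                # convex cost of accumulated byproducts
net = gains - costs

peak = q[np.argmax(net)]           # output level where net benefit peaks
# Past `peak`, each additional unit of AI-assisted output lowers net benefit.
```

With these assumed parameters, net benefit peaks at an interior output level (around q ≈ 3.5) and actually turns negative before q = 10: the convexity argument in miniature.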

The essay draws on isoquant analysis to model three possible equilibria for human time use in AI-augmented research. The optimal path maintains the same human time commitment, redirecting it toward higher-quality engagement. But the gravitational pull of AI substitution tempts researchers to reduce their time, which erodes attention and learning. Cunningham references Karpathy's claim that human verification is the new critical skill and adds two more: maintaining high attention to resist drifting into "button pusher" mode, and managing the congestion that faster work inevitably creates. The piece is unusually honest about the costs of AI adoption, serving as a counterweight to pure enthusiasm.
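The substitution pull in the isoquant analysis can be stated in one line. The linear technology below is an assumed stylization of "near-perfect substitutes", not the essay's exact specification:

```latex
% Research output with human time $H$ and machine time $M$ as
% near-perfect substitutes (assumed linear technology):
Q = \alpha H + \beta M .
% Along an isoquant ($Q$ held fixed), $dH/dM = -\beta/\alpha$: every unit
% of machine time releases a fixed amount of human time, so a
% cost-minimizing researcher keeps trading $H$ for cheaper $M$ --
% the drift toward ``button pusher'' mode.
```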

  • Relevance to Economics Research

The essay applies production function theory (isoquants, diminishing returns, convex costs) to the practical experience of using AI in empirical research. It identifies a genuine and under-discussed problem: that AI-enhanced productivity generates its own coordination and verification costs that can erode the net gains. For researchers managing complex projects with multiple specifications, robustness checks, and revision cycles, the congestion problem is immediately recognizable.