Claude Code 32: A Modest Proposal for Editors
- Author/Source: Scott Cunningham (Baylor University), via Substack ("Causal Inference")
-
Original: https://causalinf.substack.com/p/claude-code-32-a-modest-proposal
-
Key Ideas
- The journal system obeys a stock-flow identity (Little's Law): stock of papers under review = inflow x average review duration. If inflow rises and outflow is bottlenecked by fixed referee capacity, the stock must grow -- this is an accounting identity, not a model.
- AI worsens an already deteriorating situation: Card and DellaVigna documented submissions doubling and acceptance rates halving between 1990 and 2012, and Conley and Onder found 90% of PhDs barely publish.
- The extensive margin may dominate: the 80% of PhD economists who currently produce near-zero papers could start submitting, dwarfing intensive-margin productivity gains.
- AI-assisted papers are harder to desk-reject because they are better written, better formatted, and more methodologically rigorous -- the left tail of quality disappears.
-
Four proposals for editors: (1) decide whether the journal's objective is AI prohibition or scientific innovation; (2) adopt LLM screening layers at the desk; (3) require code repositories at submission, not just acceptance; (4) raise submission fees with price discrimination.
-
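The stock-flow identity in the first bullet can be checked with a toy queue simulation. This is a minimal sketch with illustrative numbers (the submission and capacity figures are invented, not from the article): when inflow matches referee capacity the stock is stable, and any excess inflow accumulates mechanically, month after month.

```python
# Discrete-time sketch of the stock-flow identity (Little's Law: L = lambda * W).
# All numbers below are illustrative assumptions, not figures from the article.
def simulate(inflow_per_month, referee_capacity, months):
    """Return the stock of papers under review after `months` periods,
    given a fixed per-month referee (outflow) capacity."""
    stock = 0
    for _ in range(months):
        stock += inflow_per_month              # new submissions join the queue
        stock -= min(stock, referee_capacity)  # referees clear at most their capacity
    return stock

baseline = simulate(inflow_per_month=100, referee_capacity=100, months=24)
doubled = simulate(inflow_per_month=200, referee_capacity=100, months=24)
print(baseline)  # 0: outflow keeps pace, the stock is stable
print(doubled)   # 2400: the excess 100 papers/month accumulate for 24 months
```

No behavioral model is needed for this result, which is the point of calling it an accounting identity: with outflow capped, the growth of the stock is pure arithmetic.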
Summary
Cunningham applies the Neal-Rick stock-flow framework from incarceration research to academic publishing. Just as longer prison sentences mechanically increased the prison population by reducing outflow, AI-driven increases in manuscript inflow must mechanically increase the stock of papers under review when referee capacity is fixed. This is not a prediction but an accounting identity. At 2x submissions, desk rejection must rise from 50% to 75% to keep the referee queue stable; at 5x, to 90%.
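The desk-rejection figures follow directly from holding the refereed flow constant. A one-line sketch, assuming the article's 50% baseline pass-through rate to referees:

```python
# Desk-rejection rate required to keep the flow of papers sent to referees
# constant when submissions scale by a factor k, assuming a 50% baseline
# pass-through rate (the baseline is taken from the article; the function
# name is ours).
def required_desk_rejection(k, baseline_pass=0.5):
    return 1 - baseline_pass / k

for k in (1, 2, 5):
    print(k, required_desk_rejection(k))
# doubling submissions (k=2) requires 75% desk rejection; k=5 requires 90%
```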
The article distinguishes two objective functions journals might adopt. The first is AI prohibition -- detect and reject AI-generated manuscripts. The second is to recognize that peer review exists to propel science forward, not to protect jobs, and therefore to invest in evaluation infrastructure that can handle more science regardless of how it was produced. Cunningham argues for the second and offers three concrete proposals: LLM screening layers that give editors a "pre-desk report" in five minutes instead of thirty; mandatory code repositories at submission (moving the AER's acceptance-stage requirement to the front of the pipeline); and carefully structured fee increases with price discrimination to fund editorial capacity while protecting junior and developing-country researchers.
He engages with Joshua Gans's response that the equilibrium will adjust, agreeing on the long run but emphasizing that transition dynamics are the problem. The binding constraint on science has shifted from production to evaluation, and editors do not have the luxury of waiting.
Relevance to Economics Research
This article provides the most actionable framework in Cunningham's series for how the journal system should respond to AI-driven productivity gains. The stock-flow identity makes the argument irrefutable at the accounting level, and the policy proposals -- especially code repos at submission and LLM desk screening -- are immediately implementable. The piece is essential reading for editors, editorial boards, and anyone involved in the economics of scientific publishing.