Claude Code Changed How I Work (Part 2)
- Author/Source: Scott Cunningham (Baylor), via Substack ("Causal Inference")
- Original: https://causalinf.substack.com/p/claude-code-changed-how-i-work-part-b7f
Key Ideas
- AI has changed the production function for cognitive output from quasi-concave (some human time always required) to linear, with isoquants along which human and machine time are perfect substitutes: Q = aH + bM.
- With linear isoquants and cheap machine time, cost minimization yields a corner solution: H = 0. For the first time, cognitive output can be produced with zero human time.
- Human capital accumulation requires a chain: Time → Attention → Human Capital → Output. AI creates a bypass (AI → Output) that severs learning from production.
- The danger zone: when behavioral response to AI reduces human time below a threshold H-bar, output actually falls below pre-AI levels despite better technology.
- Agentic AI may paradoxically be better for preserving attention than chatbot coding, because agents require supervision, direction, and verification — keeping humans cognitively engaged.
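The corner-solution logic in the ideas above can be sketched numerically. This is my own illustration, not code from the post: the function, coefficients, and prices are all made-up assumptions chosen to show why linear isoquants push the cost-minimizer to H = 0 when machine time is cheap.

```python
def min_cost_allocation(Q, a, b, w_H, w_M):
    """Minimize w_H*H + w_M*M subject to a*H + b*M >= Q.

    With the linear production function Q = a*H + b*M, isoquants are
    straight lines, so the optimum is a corner: use only the input with
    the lower cost per unit of output (wage / productivity).
    """
    if w_H / a < w_M / b:
        return (Q / a, 0.0)   # human time cheaper per unit of output
    return (0.0, Q / b)       # machine time cheaper: H = 0 corner

# Subscription pricing makes machine time's marginal cost near zero,
# so the rational choice is all machine time and zero human time.
H, M = min_cost_allocation(Q=100, a=1.0, b=1.0, w_H=50.0, w_M=0.01)
# H == 0.0, M == 100.0
```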
Summary
Cunningham develops a formal economic theory of attention and AI using production function analysis. Before AI, cognitive output required quasi-concave production functions — you always needed some human time. Post-AI, isoquants become linear (perfect substitutes), and since machine time is cheap (subscription pricing makes marginal cost near zero), rational agents choose corner solutions with zero human time. This creates a paradox: the optimal choice for any single task (minimize cost via AI) may maximize lifetime costs through human capital depreciation.
The core framework identifies a "productivity zone" (maintain human time, capture AI augmentation) versus a "danger zone" (reduce human time below threshold, lose more than AI gains). The chain Time → Attention → Human Capital → Output is severed by the AI bypass. Cunningham argues this is different from past automation (calculators, statistical software) because AI can handle entire cognitive workflows, not just routine subtasks. He previews Part 3's argument that agentic AI — requiring supervision and direction — may preserve attention better than copy-paste vibe coding.
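The productivity-zone versus danger-zone distinction can be made concrete with a toy simulation. This is my own construction, not Cunningham's model: every functional form and parameter (the threshold `H_bar`, the learning and decay rates, the AI boost) is an assumed placeholder, chosen only to show how output can fall once human time drops below the threshold.

```python
def output_over_time(H, periods=10, H_bar=4.0, K0=10.0,
                     learn=0.5, decay=0.3, ai_boost=5.0):
    """Per-period output = human capital K + AI augmentation.

    K accumulates when human time H stays at or above the threshold
    H_bar (Time -> Attention -> Human Capital -> Output) and
    depreciates when the AI bypass pushes H below it.
    """
    K = K0
    path = []
    for _ in range(periods):
        if H >= H_bar:
            K += learn * H       # attention maintained: capital accumulates
        else:
            K *= (1 - decay)     # learning severed: capital depreciates
        path.append(K + ai_boost)
    return path

productivity_zone = output_over_time(H=4.0)  # keep human time at H_bar
danger_zone = output_over_time(H=0.0)        # corner solution: H = 0
# Under these assumed parameters, danger-zone output declines every
# period while productivity-zone output rises.
```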
Relevance to Economics Research
This is an economist's theory of AI impact written in the language of economics (production functions, isoquants, cost minimization, human capital). Directly relevant to how researchers should think about their own AI adoption — the danger zone framework provides a decision model for when AI use becomes counterproductive. The attention chain (Time → Attention → Human Capital → Output) offers a framework for designing research workflows that preserve learning.
Related Concepts
- concepts/cognitive-load-and-ai
- concepts/human-capital
- concepts/claude-code
- concepts/agentic-workflows
- concepts/hard-vs-easy-tasks
Related Summaries
- summaries/cc-changed-how-i-work-1
- summaries/cc-changed-how-i-work-3
- summaries/cc-changed-how-i-work-4
- summaries/cc-changed-how-i-work-5
- summaries/agentic-everything
- summaries/shape-of-ai
- summaries/ai-normal-technology