Some Thoughts on AI and Research
- Author/Source: Isaiah Andrews (MIT Economics)
- Original: https://economics.mit.edu/sites/default/files/2026-04/IA%20AI%20note_1.pdf
Key Ideas
- Structures thinking about AI's impact on research around three cases: (1) models do all intellectual tasks better than humans (human capital investments irrelevant), (2) models excel at current strengths but remain below humans in other areas (invest in scarce skills models are bad at), (3) capabilities level off modestly above current levels (still a major disadvantage not to adopt).
- In Case 2, if models remain bad at taste, judgment, and problem selection, returns to developing those skills go up rather than down — complementing AI's strength at coding, writing, and proofs.
- Hiring, publication, and tenure standards are set in equilibrium: if the production possibility frontier expands and you don't keep up, you lose out in an absolute sense. "No one uses these models for research" is not an equilibrium.
- Three dimensions for learning AI tools: experimentation (returns are high because tools are new), verification (learning when and how to audit model output), and division of labor (AI changes optimal collaboration structure, not just solo work).
- PhD students are likely under-investing in exploring AI tools: if a tenured professor is experimenting with these tools more aggressively than his students are, the students should be doing more.
- GPT 5.4 Pro is "substantially better at convex analysis proofs than I am" — a concrete current-capability benchmark from a top econometrician.
- Methodological research should address problems people currently have, not legacy problems from the pre-AI era.
Summary
Andrews, a leading econometrician at MIT, shares notes originally written for his PhD advisees on how AI should influence their research and skill-acquisition decisions. He frames the problem as a decision under uncertainty across three scenarios of AI capability progression, noting that Case 1 (full human replacement) is irrelevant for current planning since all investments have zero return in that state.
The core argument centers on Case 2: if models become great at coding, writing, and proofs but remain weak at taste, judgment, and problem selection, then the returns to developing those distinctly human skills increase. Andrews emphasizes that this is not a passive situation — equilibrium forces in the academic labor market mean that researchers who fail to adopt AI tools will be worse off than if the technology had never existed. He recommends three concrete actions: actively experiment with AI tools (paying for premium access), closely follow applied research seminars (at least weekly), and think carefully about how AI changes the division of labor in research collaborations.
The note is notable for its tone: measured, explicitly uncertain, and framed in the language of economics (production functions, equilibrium, returns to investment) rather than either techno-optimism or alarmism.
Relevance to Economics Research
A rare perspective from a top-tier econometrician on AI's implications specifically for PhD students and junior researchers. The case-based framework is a useful mental model for any economist thinking about skill investment. The equilibrium argument — that standards will adjust upward regardless of individual choices — is particularly compelling for motivating adoption.
Related Concepts
- concepts/human-capital
- concepts/ai-adoption-academia
- concepts/domain-expertise-vs-ai-skills
- concepts/research-productivity
Related Summaries
- summaries/academics-wake-up
- summaries/train-left-station
- summaries/research-in-time-of-ai
- summaries/can-ai-replace-researchers
- summaries/cc-series-21-faculty-adoption