Generative AI for Economic Research: Use Cases and Implications for Economists
Author: Anton Korinek (University of Virginia, Brookings Institution)
Key Ideas
- LLMs like ChatGPT can assist economists across six domains: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations
- Each use case is rated on a spectrum from "experimental" to "highly useful," providing a practical guide for adoption
- Generative AI is best viewed as an assistant that automates "micro-tasks" -- small cognitive tasks that are too minor for a human RA but collectively consume significant researcher time
- Humans retain comparative advantage in evaluating and discriminating content, while AI has comparative advantage in generating content (echoing Ricardo's principle)
- LLMs have a "jagged frontier" of capabilities: superhuman in some tasks (e.g., brainstorming, text synthesis, code translation) but unreliable in others (e.g., accurate citations, causal reasoning)
- Hallucination is a fundamental limitation -- LLMs produce confident-sounding but factually wrong outputs, requiring human oversight
- In the long run, AI-powered cognitive automation may have profound implications for the value of cognitive labor and the nature of economic research
Summary
Published in the Journal of Economic Literature (2023), this paper provides a comprehensive taxonomy of how generative AI -- particularly large language models -- can be integrated into the workflow of economic researchers. Korinek organizes use cases into six domains and demonstrates each with concrete GPT-4 and Claude 2 prompts and outputs. For ideation, LLMs can brainstorm research directions, provide counterarguments, and even draft referee reports on full papers. For writing, they synthesize text from bullet points, edit for style and clarity, generate titles, and translate across languages.
In background research, LLMs summarize papers, explain unfamiliar concepts, and format references -- though they frequently hallucinate citations, a critical limitation. For coding, LLMs write, debug, and translate code across languages, with ChatGPT's Advanced Data Analysis mode (formerly Code Interpreter) enabling execution in a sandboxed environment. Data analysis capabilities include extracting structured data from text, classifying content, and simulating survey responses. Mathematical derivations represent an emerging frontier where LLMs show promise but remain error-prone.
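The classification use case above can be made concrete with a minimal sketch. The prompt template, label set, and `parse_label` helper below are illustrative assumptions, not from the paper, and the actual API call (e.g., to an OpenAI-style chat endpoint) is deliberately omitted; the sketch shows only the prompt construction and the defensive parsing of a model reply.

```python
import json

# Hypothetical sketch of the classify-text use case: an LLM labels short
# texts (e.g., central-bank statements) as hawkish/dovish/neutral.
LABELS = ["hawkish", "dovish", "neutral"]

def build_prompt(text: str) -> str:
    """Construct a classification prompt asking for a JSON reply."""
    return (
        "Classify the monetary-policy stance of the following statement as "
        f"one of {LABELS}. Reply with JSON: {{\"label\": ...}}.\n\n"
        f"Statement: {text}"
    )

def parse_label(reply: str) -> str:
    """Extract the label from the model's reply; fall back to 'neutral'
    when the reply is malformed or uses an unexpected label."""
    try:
        label = json.loads(reply).get("label", "neutral")
    except json.JSONDecodeError:
        return "neutral"
    return label if label in LABELS else "neutral"

# The API call itself is omitted; here we parse a sample reply of the
# kind a model typically returns for such a prompt.
sample_reply = '{"label": "hawkish"}'
print(parse_label(sample_reply))  # hawkish
```

The defensive parsing reflects the paper's broader point: LLM output requires human-designed validation, since replies can be confidently malformed.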
The paper also provides a technical primer on how LLMs work (pretraining, instruction fine-tuning, RLHF), discusses scaling laws and emergent capabilities, and surveys the LLM landscape as of September 2023. Korinek emphasizes two key warnings: it is easy to both overestimate and underestimate LLM capabilities, and researchers should treat them like "a highly motivated intern who is smart but lacks context."
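The scaling laws mentioned in the primer can be illustrated with a short sketch. The functional form and constants below follow Hoffmann et al. (2022) ("Chinchilla"), not this paper; they are included only as an assumed example of how loss falls as a power law in parameters and training data.

```python
# Illustrative Chinchilla-style scaling law: predicted pretraining loss
# decreases as a power law in model parameters N and training tokens D.
# Constants are the approximate fitted values from Hoffmann et al. (2022).

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both model size and data 10x lowers the predicted loss:
small = chinchilla_loss(1e9, 2e10)   # ~1B params, ~20B tokens
large = chinchilla_loss(1e10, 2e11)  # ~10B params, ~200B tokens
print(small > large)  # True
```

The irreducible term `E` is why "emergent capabilities" are debated: smooth loss curves like this one can still hide abrupt jumps in downstream task performance.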
Looking ahead, Korinek speculates that AI-based assistants will increasingly generate research content, while human researchers will focus on organizing projects, prompting, and evaluating outputs. In the long term, AI systems may be able to produce and articulate superior economic research by themselves, raising fundamental questions about the future of cognitive labor.
Relevance to Economics Research
This is one of the foundational papers framing AI adoption for economists. Published in a top field journal, it provides both a practical how-to guide and a forward-looking analysis of how generative AI will reshape the profession. Its taxonomy of use cases (ideation, writing, background research, data analysis, coding, math) has become the standard framework referenced by subsequent work on AI in economics. The paper is particularly valuable for economists just beginning to explore AI tools, as it sets realistic expectations about both capabilities and limitations.
Related Concepts
- concepts/ai-research-tools
- concepts/ai-adoption-academia
- concepts/prompt-engineering
- concepts/ai-limitations
- concepts/ai-workflows