# Coding with LLMs
Coding with LLMs refers to the practice of using language models as interactive coding assistants — for writing, debugging, explaining, and refactoring research code.
## Context & Background
For economists and social scientists, many of whom are self-taught programmers, LLM coding assistants represent a step change in productivity. These tools can:
- Write code from descriptions: "Create a function that computes Fama-MacBeth regressions"
- Debug errors: Paste an error message and get a fix
- Explain code: Understand unfamiliar codebases or languages
- Refactor: Modernize or restructure existing code
- Translate: Convert between programming languages
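To make the first capability concrete, here is a minimal sketch of what a request like "create a function that computes Fama-MacBeth regressions" might produce. The function name, array layout, and use of NumPy are illustrative assumptions, not a prescribed implementation:

```python
import numpy as np

def fama_macbeth(returns, X):
    """Fama-MacBeth two-pass estimator (illustrative sketch).

    returns: (T, N) array of asset returns, one row per period.
    X: (T, N, K) array of firm characteristics, matched to returns.
    Returns the time-series mean of the cross-sectional coefficients
    (intercept first) and their Fama-MacBeth t-statistics.
    """
    T, N = returns.shape
    coefs = []
    for t in range(T):
        # First pass: cross-sectional OLS with an intercept for period t
        Xt = np.column_stack([np.ones(N), X[t]])
        beta, *_ = np.linalg.lstsq(Xt, returns[t], rcond=None)
        coefs.append(beta)
    coefs = np.array(coefs)  # (T, K+1) time series of coefficients
    # Second pass: average over periods; standard errors from the
    # time-series variation of the estimated coefficients
    mean = coefs.mean(axis=0)
    se = coefs.std(axis=0, ddof=1) / np.sqrt(T)
    return mean, mean / se
```

Even for a routine like this, the "test everything" advice below applies: a quick check on simulated data with a known slope catches silent shape or indexing mistakes.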
## Practical Implications
- Describe what you want, not how: Let the AI choose the implementation approach
- Test everything: AI-generated code can have subtle bugs — write tests
- Learn from the code: Use AI-generated code as a learning opportunity, not a black box
- Use version control: Track AI-generated changes so you can revert if needed
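"Test everything" can be made concrete: before trusting a generated helper, check it against inputs whose answers you can verify by hand. The `winsorize` function below is a hypothetical stand-in for AI-generated code, chosen because quantile-based clipping is a common place for subtle off-by-one or interpolation bugs:

```python
import numpy as np

def winsorize(x, lower=0.01, upper=0.99):
    """Clip values of x at the given lower/upper quantiles.
    (Stand-in for an AI-generated helper we want to verify.)"""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

# Hand-checkable cases: the input 0..100 makes the quantiles exact,
# so the expected clipping bounds can be computed mentally.
x = np.arange(101, dtype=float)
w = winsorize(x)
assert w.min() == 1.0 and w.max() == 99.0  # clipped at 1st/99th percentiles
assert winsorize(np.array([5.0])).item() == 5.0  # single value unchanged
assert len(w) == len(x)  # no observations silently dropped
```

Tests like these are cheap to write (an LLM can draft them too) but the expected values should come from you, not from the same model that wrote the code.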
## Key Sources
- Using LLMs with Cursor: Modern AI for Economics Research
- Arin Dube Thread: LLMs Haven't Raised NBER Working Papers Above Trend
- Claude Code for Academics: An AI Agent for Empirical Research
- Research in the Time of AI
- OpenAI is throwing everything into building a fully automated researcher