Reflections on Vibe Researching
- Author/Source: Joshua Gans (Substack, 2026-01-03)
- Original: https://joshuagans.substack.com/p/reflections-on-vibe-researching
Key Ideas
- Gans ran a year-long experiment in AI-first research, aiming for maximum speed from idea to posted paper, producing a large number of working papers in 2025.
- AI makes subtle theoretical mistakes beyond simple math errors -- particularly around equilibrium concepts, information sets, and overclaiming generality in game-theoretic models.
- Lower costs of research completion led to pursuing lower-quality ideas to fruition, because the usual decision points for abandoning projects were bypassed.
- LLMs are "seductive" -- they present formal results with confidence, making it easy to believe you have discovered something when you have not. This is distinct from mere sycophancy.
- The experiment was ultimately a failure in the sense that high-speed, low-human-input research did not produce high-quality output; human taste and judgment remained essential.
- Gans now advocates for deliberate pauses (at least a month before posting), more peer feedback, and explicit decision points about whether to continue projects.
Summary
Joshua Gans reflects on a full year of AI-first research, during which he attempted to rapidly produce papers by minimizing human input and maximizing AI assistance. The experiment was prompted by an earlier success: using o1-pro, he had drafted a paper in about an hour that went on to be published in Economics Letters. Over 2025, he produced many working papers, some accepted at lower-tier journals and a few under revision at better outlets, but none yet at top-tier journals.
Gans identifies three major pitfalls. First, AI makes mistakes that go beyond mathematical errors -- in game theory, it can produce formally correct derivations that miss subtle issues with equilibrium concepts or information sets. Second, the reduced cost of completing projects means researchers pursue weaker ideas they would normally abandon, and AI encourages "bloat" through excessive extensions. Third, LLMs seduce researchers by presenting results with unwarranted confidence; Gans spent many days believing he had results that turned out to be wrong.
Despite these problems, Gans remains committed to AI-first research but with guardrails: mandatory cooling-off periods, more seminars and peer discussion, and explicit go/no-go decision points. He concludes that even with dramatically improved AI capabilities, human research taste and judgment are more important than ever, and the quantity of quality research is unlikely to increase much even as the tools improve.
Relevance to Economics Research
This is a rare, honest post-mortem from a prominent economist who went all-in on AI research for an entire year. The lessons about idea quality filtering, the seductiveness of AI-generated formal results, and the importance of human judgment are directly relevant to any economist considering how deeply to integrate AI into their workflow. The specific warnings about game-theoretic modeling pitfalls are especially valuable for theorists.