The Train Has Left the Station: Agentic AI and the Future of Social Science Research

Author/Source: Solomon Messing and Joshua Tucker, Brookings Institution, March 3, 2026

Key Ideas:
- AI coding agents (Claude Code, Codex, Jules) represent a qualitative shift from chatbot-based AI assistance: they can create files, execute code, search the web, and iterate autonomously within a researcher's file system
- Concrete examples: transforming a method implementation into a full R package in one day; producing a 20-page analytical summary with data visualizations in under an hour; building a multi-language, multi-model pilot study pipeline
- Agentic AI is a leveling force: researchers at under-resourced institutions and undergraduates can now harness their own AI research assistants
- Risks include skill atrophy, security vulnerabilities (agents deleting data, ingesting security keys), quality degradation in long sessions, and energy consumption
- Journal submissions may increase 50% or more, straining an already overwhelmed peer review system
- The economics of hiring research assistants will fundamentally change: many RA tasks can now be performed at lower cost by AI
- Research assessment and hiring criteria may need to shift toward evaluating deep understanding (e.g., through talks) rather than paper output
- AI usage declarations should become standard, similar to conflict of interest declarations

Summary: Messing and Tucker provide a measured, institutionally grounded assessment of agentic AI's implications for social science research. Writing for Brookings, they combine firsthand experience with AI coding agents and careful analysis of risks and institutional consequences. Their key distinction is between chatbot-based AI (which most academics have encountered) and agentic AI coding tools that operate autonomously within file systems: creating and executing code, producing documents, and iterating under heavier or lighter supervision. They offer concrete examples from their own research, including building a complete R package in a single day and producing a 20-page analytical report in under an hour.

The paper is notable for its balanced treatment of downsides. The authors note that a METR study initially found AI slowed professional developers on small bug fixes (a result that reversed with newer models). They report an incident in which an agent deleted half a dataset while "editing" it. They flag security risks both from agents ingesting local security keys and from users handing agents credentials for convenience. They estimate that a single coding session consumes 25-50 watt-hours and that a full day of usage exceeds 1 kilowatt-hour. On institutional consequences, they project significant increases in journal submissions, fundamental changes to the economics of RA hiring, and a need to rethink how academic merit is assessed when AI can produce papers quickly. They recommend standardizing AI usage declarations and call for institutions to develop guidance on security policy, research evaluation, and peer review in the age of AI.
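
For scale, the reported energy figures imply roughly 20-40 coding sessions per kilowatt-hour. A minimal sketch of that back-of-the-envelope arithmetic (the per-session and full-day values are the article's estimates; the variable names are illustrative):

```python
# Reported estimates: each agentic coding session uses 25-50 Wh,
# and a full day of heavy usage exceeds 1 kWh (1000 Wh).
SESSION_WH_LOW, SESSION_WH_HIGH = 25, 50
FULL_DAY_WH = 1000

# Sessions needed to reach the full-day threshold at each end of the range.
sessions_at_high_draw = FULL_DAY_WH / SESSION_WH_HIGH  # heavier sessions
sessions_at_low_draw = FULL_DAY_WH / SESSION_WH_LOW    # lighter sessions

print(f"{sessions_at_high_draw:.0f}-{sessions_at_low_draw:.0f} "
      f"sessions per kilowatt-hour")
```

The two estimates are internally consistent: a researcher running agents throughout a workday plausibly accumulates a few dozen sessions.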

Relevance to Economics Research: This is perhaps the most directly relevant article for economics researchers considering agentic AI adoption. It provides concrete workflow examples (R packages, data analysis, literature reviews), quantifies productivity gains, and addresses the specific institutional structures of social science research: journals, peer review, research assistants, hiring, and tenure. The analysis of RA hiring disruption is particularly relevant: if an AI agent can perform literature searches, data labeling, code review, and statistical analysis at lower cost than human RAs, the training pipeline for junior researchers is at risk. The authors' recommendation to rethink merit assessment, possibly emphasizing talks over papers, has direct implications for hiring and promotion in economics departments.

Related Concepts:
- concepts/agentic-ai
- concepts/ai-adoption-academia
- concepts/human-ai-collaboration
- concepts/jagged-frontier

Related Summaries:
- summaries/academics-wake-up
- summaries/academics-wake-up-2
- summaries/something-big-happening
- summaries/ai-normal-technology