Academics Need to Wake Up on AI, Part II

Author/Source: Alexander Kustov, University of Notre Dame, Popular by Design (Substack), March 2026

Key Ideas:
- Qualitative research and original data collection (fieldwork, interviews, archival work) will rise in relative value as AI automates quantitative and conceptual tasks
- The "jagged frontier" of AI capabilities explains the polarization of academic opinion: critics point to troughs, enthusiasts point to peaks
- User expertise still largely determines AI output quality -- using agentic AI is not copy-pasting from a chatbot; it requires detailed instruction files and iterative workflows
- Publication lag makes academic AI capability critiques obsolete by the time they appear: citing 2025 studies about GPT-4 to argue against AI use in 2026 is like citing flip-phone studies to argue against smartphones
- Most papers are already read primarily by AI, not humans -- academics should accept LLMs as their primary audience and publish in machine-readable formats (.md)
- AI exposes what was already broken in academia: the replication crisis, citation padding, and the production of papers nobody reads were pre-existing conditions
- Skill atrophy is a real risk, especially for students who have not yet internalized the cognitive skills AI might short-circuit
- AI writing detectors do not work -- the original AI-generated post passed every major detector as "100% human"
- Disclosure norms create perverse incentives: honest users get punished while dishonest users face no consequences

Summary: Writing in response to over a thousand reactions (many hostile) to his original post, Kustov reflects on what he got right, what he should have done differently, and what the backlash revealed. He acknowledges three mistakes: revealing the AI authorship as a "cheeky follow-up" rather than being upfront, failing to clarify he meant AI is better than professors globally (not just at elite US institutions), and not catching minor stylistic errors in the AI-generated text. He points readers to the Messing and Tucker Brookings piece as a more measured version of many of the same arguments.

The substantive additions are significant. Kustov concedes that qualitative research involving fieldwork, interviews, and trust-building with communities cannot be automated, and argues this work will rise in relative value. He introduces the concept of "publication lag" as a structural problem: academic publishing timelines are fundamentally incompatible with AI's rate of improvement, meaning peer-reviewed critiques of AI capabilities are outdated before they appear. He takes skill atrophy seriously as a risk for students and trainees, calling it an urgent curriculum problem. On disclosure, he argues from bitter personal experience -- receiving threats and calls to be fired after disclosing AI use -- that mandatory AI acknowledgment norms will select for dishonesty given current professional incentives. His core position remains: "what matters is whether the work is correct and valuable, not whether a human or a machine typed the sentences."

Relevance to Economics Research: This piece deepens the analysis of institutional disruption in academic economics. The publication lag problem is particularly relevant: empirical papers studying AI capabilities face a fundamental methodological challenge when the technology evolves faster than the review cycle. The skill atrophy concern maps directly onto graduate training in economics -- if students use AI to write code and run regressions without understanding the underlying methods, the long-term consequences for the profession could be severe. The observation that qualitative and field-based research gains relative value offers strategic guidance for researchers choosing between methodological approaches.

Related Concepts:
- concepts/ai-adoption-academia
- concepts/jagged-frontier
- concepts/human-ai-collaboration
- concepts/agentic-ai

Related Summaries:
- summaries/academics-wake-up
- summaries/train-left-station
- summaries/shape-of-ai
- summaries/what-ai-got-wrong