How to Encourage Adoption of AI Among Faculty: Experience, Cost, and the Killer Use Case

Author: Scott Cunningham (Ben H. Williams Professor of Economics, Baylor University; Visiting Professor, Harvard University)

Key Ideas

  • AI is an "experience good" -- faculty cannot envision its value without direct hands-on use; no demo, brochure, or testimonial closes the gap between perceived and actual value
  • Three barriers to faculty AI adoption: (1) the experience-good problem, (2) AI repugnance (it feels personal and uncomfortable in ways static software never did), and (3) financial cost (frontier models cost $200/month and universities license none of them)
  • There are only two levers for driving adoption: providing direct experience and subsidizing costs through institutional licenses
  • The key strategic insight: identify 1-2 "killer use cases" that are universal, high-value, time-intensive, poorly done by most faculty, and visibly transformative when done well
  • Two proposed killer use cases: (1) organizing messy research directories with Claude Code and (2) generating beautiful lecture slides -- both tasks every faculty member already does
  • The "Referee 2" protocol from MixtapeTools: a systematic five-audit framework (code audit, cross-language replication, directory audit, output automation, econometrics audit) that uses AI agents to generate formal referee reports on your own work
  • Cross-language replication catches errors that single-language review cannot -- hallucination patterns differ across R, Stata, and Python, so requiring 6-decimal agreement across all three is a powerful verification strategy
  • Once faculty adopt AI for one compelling use case, they naturally discover others -- research coding, grant proposals, paper writing, course design follow organically
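The five audits lend themselves to a simple checklist. Below is a minimal sketch, in Python, of how they might be dispatched to an agent; the audit names come from the talk, while the run_agent() helper, the prompt wording, and the rest are hypothetical stand-ins for whatever agent interface (Claude Code, an API, etc.) you actually use -- this is an illustration, not the MixtapeTools implementation.

    # Sketch only: the five audit names come from the "Referee 2" protocol;
    # the prompts and the run_agent() helper are hypothetical stand-ins for
    # whatever agent interface you actually use.
    AUDITS = {
        "code audit": "Read every script in this project and write a referee report on correctness and reproducibility.",
        "cross-language replication": "Independently re-implement the main analysis in R, Stata, and Python and report whether the estimates agree to 6 decimal places.",
        "directory audit": "Document the directory structure and flag files that are orphaned, duplicated, or unused by the pipeline.",
        "output automation": "Check that every table and figure can be regenerated from raw data by a single entry-point script.",
        "econometrics audit": "Write a referee report on identification, specification, and inference in the empirical analysis.",
    }

    def run_agent(prompt: str) -> str:
        """Hypothetical helper: send a prompt to your AI agent and return its report."""
        raise NotImplementedError("Wire this up to the agent or API you actually use.")

    def referee_two(project_description: str) -> dict[str, str]:
        """Run each audit in turn and collect the resulting referee reports."""
        return {
            name: run_agent(f"{project_description}\n\nAudit: {name}\n{instruction}")
            for name, instruction in AUDITS.items()
        }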

Summary

Presented to the Baylor University AI Task Force in February 2026, this talk by Scott Cunningham lays out a strategic framework for encouraging AI adoption among university faculty. Cunningham argues that most institutional AI strategies fail because they treat AI like normal software, offering seminars and vague encouragement. But AI is fundamentally different: it is an experience good whose value is impossible to appreciate without direct, hands-on use. Faculty who have never used frontier AI agents map the unknown onto familiar but misleading analogies ("another software tool," "a fancy toy," "better Google"), massively underestimating what agents can do.

Beyond the experience-good problem, Cunningham identifies two additional barriers. First, AI triggers a unique repugnance -- unlike PowerPoint or Excel, AI talks to you, feels personal, and taps into genuine ethical concerns that create enough friction for many faculty to avoid it entirely. Second, frontier models (GPT-5.3, Claude Opus 4.6, Gemini) cost $200/month for transformative capability, $20/month for limited utility, and the free tier "hamstrings researchers to the point of uselessness." Universities license none of these, so the cost falls entirely on individual faculty.

The constructive proposal centers on two "killer use cases" that meet every adoption criterion. The first is pointing Claude Code at a messy research directory -- the chaotic Dropbox folders with "Paper draft v3 FINAL (2).docx" and 24 subdirectories -- and having it organize, document, and create a replicable pipeline without deleting anything. The second is generating beautiful lecture slides, a task that is universal, time-intensive, and visibly improved by AI. Cunningham details the "Referee 2" audit protocol, which uses AI to perform five systematic audits of empirical research, including cross-language replication where R, Stata, and Python must match to 6 decimal places. The concrete recommendations to the task force: provide frontier model licenses, run hands-on workshops (not seminars), and start with one use case per person.
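As a concrete illustration of the cross-language check, the sketch below compares headline estimates to 6 decimal places. It assumes each implementation (R, Stata, Python) has already exported its estimates to a CSV with "term" and "estimate" columns; the file names and layout are hypothetical conveniences, not part of the protocol as presented.

    # Minimal sketch of the 6-decimal agreement check. Assumes each of the
    # three independent runs has written its estimates to a CSV with columns
    # "term" and "estimate"; file names and layout are illustrative only.
    import csv

    def load_estimates(path: str) -> dict[str, float]:
        with open(path, newline="") as f:
            return {row["term"]: float(row["estimate"]) for row in csv.DictReader(f)}

    def disagreements(paths: dict[str, str], tol: float = 1e-6) -> list[str]:
        """Return the terms whose estimates differ by more than tol across languages."""
        results = {lang: load_estimates(p) for lang, p in paths.items()}
        shared = set.intersection(*(set(r) for r in results.values()))
        return [
            term for term in sorted(shared)
            if max(r[term] for r in results.values()) - min(r[term] for r in results.values()) > tol
        ]

    if __name__ == "__main__":
        # Hypothetical output files from the three independent implementations.
        bad = disagreements({
            "r": "estimates_r.csv",
            "stata": "estimates_stata.csv",
            "python": "estimates_python.csv",
        })
        print("All estimates agree to 6 decimals." if not bad else f"Check these terms: {bad}")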

Relevance to Economics Research

This presentation is essential reading for anyone involved in institutional AI strategy at a university. Cunningham's framework -- experience good + cost barrier + killer use case -- explains why most top-down AI adoption efforts fail and offers a concrete alternative. The "Referee 2" protocol is directly applicable to empirical economics research, providing a systematic quality assurance framework that leverages AI's strengths while preserving human judgment. The cross-language replication insight has implications for research credibility and the replication crisis in economics.