Claude Code 21: Faculty Adoption of AI, Decks and Folders, and Security Risks

  • Author/Source: Scott Cunningham (Baylor University), via Substack ("Causal Inference")
  • Original: https://causalinf.substack.com/p/claude-code-21-faculty-adoption-of

  • Key Ideas

  • AI agents are experience goods — faculty cannot gauge their value before using them intensively
  • AI triggers moral repugnance for many people due to its uncanny-valley intimacy, distinct from rational concerns about automation or environment
  • Frontier model subscriptions ($100–200/month) are necessary for productive use but create a cost barrier
  • Security risks from AI agents with system access are massive and universities will likely resist deploying them on institutional machines
  • Lecture slide creation ("decks") is the ideal first use case for faculty adoption: high-value, time-intensive, and most faculty are bad at it
  • "Research is a collection of folders on a computer" — anyone whose work lives in directories can benefit from Claude Code
  • Pointing Claude Code at old dissertation directories or using code audit personas can demonstrate value rapidly
  • Productivity gains may increase paper supply without increasing journal slots, widening the distribution and increasing noise in the publication market
  • Universities need two levers for adoption: experience (to close the perceived-vs-actual value gap) and subsidies (to lower financial costs)

  • Summary

Cunningham recounts a talk he gave to a Baylor University group about encouraging faculty AI adoption. His framework rests on several economic concepts. First, AI agents are experience goods whose value cannot be assessed without sustained use, creating a gap between perceived and actual benefit. Second, LLMs trigger a form of moral repugnance rooted in their uncanny intimacy — software that passes the Turing test and insists on being personal provokes deeper resistance than a spreadsheet with equivalent capabilities. Third, frontier model subscriptions are expensive enough to deter adoption without institutional support.

He proposes two concrete entry points for faculty. The primary one is making lecture slides: decks are universally needed, time-consuming, and most faculty produce mediocre ones. Claude Code can generate exceptional decks while the professor simultaneously learns the material and practices the lecture. The second is having Claude Code audit and organize old research directories. He also raises a serious concern: universities will likely not pay for or permit AI agents on their networks due to security risks (agents executing arbitrary code), meaning faculty may need personal machines and subscriptions. The essay closes by noting that increased paper supply without increased journal capacity will create fiercer competition, making adoption necessary even for those with reservations.

  • Relevance to Economics Research

The essay directly addresses the institutional economics of AI adoption in universities — experience goods, repugnance as a constraint on markets, price discrimination, and supply-side effects on the publication market. It provides actionable advice for department chairs and administrators considering AI adoption programs, while honestly confronting the security and cost barriers that make institutional adoption difficult.