The Jagged Frontier

The "jagged frontier" is a concept popularized by Ethan Mollick describing the uneven and unpredictable boundary of AI capabilities — where AI excels at some surprisingly difficult tasks while failing at seemingly simple ones.

Context & Background

Unlike a smooth capability curve, where the chance of success declines steadily as tasks get harder, AI capabilities form a jagged, irregular boundary. An LLM might write a sophisticated literature review but fail to correctly count the words in a sentence. It might generate complex statistical code but make basic arithmetic errors.

This jaggedness creates two distinct failure modes for users:

  1. Falling off the frontier: Attempting a task beyond AI capability and getting confidently wrong results
  2. Staying inside the frontier: Not using AI for tasks it could handle well, due to overly conservative assumptions

Key Perspectives

Multiple sources in this wiki reference the jagged frontier, particularly in discussions of AI adoption in academia. The concept explains why some researchers have transformative experiences with AI while others are disappointed: the outcome depends heavily on which tasks they try.

Practical Implications

  • Test before trusting: Don't assume AI can or can't do something — try it and verify
  • Expect inconsistency: AI may handle 9 out of 10 similar tasks well and fail on the 10th
  • Share frontier knowledge: Document which tasks work well and which don't for your specific research domain
  • Update regularly: The frontier shifts as models improve — re-test periodically
  • Verify at the edges: Apply extra scrutiny to tasks near the boundary of AI capability
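For tasks with cheaply computable ground truth, the "test before trusting" advice can be sketched as a small verification harness. This is purely illustrative: `ask_model` is a hypothetical stub that simulates jagged behavior (mostly right, occasionally wrong), not a real model API.

```python
import random

def ask_model(sentence: str) -> int:
    """Hypothetical stand-in for asking an LLM to count words.

    Simulates a jagged model: correct about 80% of the time,
    off by one otherwise.
    """
    true_count = len(sentence.split())
    return true_count if random.random() < 0.8 else true_count + 1

def verified_word_count(sentence: str) -> bool:
    """Test before trusting: check the model's answer against ground truth
    instead of assuming it can or can't do the task."""
    claimed = ask_model(sentence)
    actual = len(sentence.split())  # ground truth is cheap here
    return claimed == actual

if __name__ == "__main__":
    random.seed(42)
    trials = [verified_word_count("the quick brown fox jumps") for _ in range(100)]
    print(f"correct on {sum(trials)} of {len(trials)} trials")
```

The same pattern generalizes to any task where you can verify output independently (running generated code, recomputing a statistic): the verification step is what turns an unknown position on the frontier into a known one.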