Chatbots Done Right - From Casual Use to Genuine Productivity
Author/Source: Chris Blattman, claudeblattman.com
Key Ideas
- Always use the strongest model available (Opus over Sonnet for Claude, o1-pro/o3 over GPT-4o for ChatGPT); the default model is the cheapest, not the best
- Provide context about yourself at the start of every conversation; stating your role, field, current projects, and preferences dramatically improves response quality
- Use saved/persistent context (Claude Projects, ChatGPT Custom Instructions) so every conversation starts calibrated to your situation
- Structure requests with four explicit components: Context, Task, Format, and Constraints; this consistently produces better results
- Use multi-turn conversations rather than one-shot prompts; build iteratively within a session for compounding quality improvements
- Know chatbot limitations: unreliable for factual claims, math, current events, and specific professional advice; genuinely good at drafting, synthesis, brainstorming, explaining, and format conversion
- The right mental model: "a brilliant, well-read colleague with no memory and a tendency to make things up"
- Build a personal library of effective prompts for recurring tasks
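The four-part request structure and the prompt-library idea can be sketched as a small helper. This is an illustrative sketch, not code from the article; every name here (`build_prompt`, `PROMPT_LIBRARY`, the example template) is hypothetical.

```python
# Illustrative sketch of the Context/Task/Format/Constraints structure
# and a personal prompt library. All names are hypothetical.

def build_prompt(context: str, task: str, fmt: str, constraints: str) -> str:
    """Assemble a chatbot request with the four explicit sections."""
    return "\n\n".join([
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {fmt}",
        f"Constraints: {constraints}",
    ])

# A tiny "prompt library": reusable templates for recurring tasks,
# saved once and reused across conversations.
PROMPT_LIBRARY = {
    "abstract": build_prompt(
        context="I am an economics researcher preparing a journal submission.",
        task="Draft an abstract for the paper described below.",
        fmt="A single paragraph under 150 words.",
        constraints="No jargon; state the identification strategy explicitly.",
    ),
}
```

In practice each entry would be a template with placeholders for the paper or task at hand; the point is that the four sections are always present and in the same order.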
Summary
This article presents the core techniques that separate casual chatbot users from people who extract genuine productivity value from AI. The author argues that most people use chatbots like search engines (one question, one answer, move on), which is the least valuable approach. Instead, the article advocates for a systematic method: always selecting the strongest available model, frontloading context about the user's role and needs, structuring requests with explicit context/task/format/constraints, and using iterative multi-turn conversations rather than one-shots.
The article provides concrete examples drawn from academic work, including writing paper abstracts, getting feedback on draft introductions, and refining writing voice through progressive conversation. It is notably honest about chatbot limitations, warning that factual claims, citations, arithmetic, and professional advice should always be verified externally. The piece concludes by encouraging users to save effective prompts as a personal library, which becomes the foundation for more automated workflows later. A companion page on prompt engineering formalizes these ideas into a six-section framework.
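The iterative multi-turn pattern the article recommends amounts to carrying the full conversation history into each follow-up request. A minimal sketch, assuming a generic chat API that accepts a list of role-tagged messages (`call_model` is a stand-in, not a real library call):

```python
# Sketch of multi-turn refinement: each request sees all prior turns,
# so quality compounds within the session. `call_model` is a stand-in
# for any chatbot API; nothing here is a real library call.

from typing import Callable

def iterate(call_model: Callable[[list], str], turns: list[str]) -> list[dict]:
    """Run successive refinement requests, carrying history forward."""
    history: list[dict] = []
    for user_msg in turns:
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)  # the model sees every earlier turn
        history.append({"role": "assistant", "content": reply})
    return history

# Usage: start broad, then refine in follow-up turns rather than
# restarting with a new one-shot prompt each time.
demo = iterate(
    lambda h: f"(draft after {len(h) // 2 + 1} request(s))",
    ["Draft an abstract for my paper.",
     "Tighten it to 120 words and lead with the main result.",
     "Now match the tone of my earlier abstracts."],
)
```

The contrast with one-shot use is that the third request needs no restated context: the draft and both earlier instructions are already in `history`.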
Relevance to Economics Research
The techniques described here are immediately applicable to academic economics workflows: drafting abstracts, structuring referee responses, synthesizing literature, brainstorming identification strategies, and converting data analysis notes into narrative text. The emphasis on verifying factual claims and citations is especially important for researchers, as chatbots confidently hallucinate references. The saved-context feature (Claude Projects) maps well to maintaining separate research project contexts, and the prompt library concept aligns with building reusable templates for recurring tasks like recommendation letters, grant proposals, and paper reviews.