Feedback Machines: Writing and editing research papers with generative AI
- Author/Source: Claes Backman (Substack, 2026-01-22)
- Original: https://claesbackman.substack.com/p/feedback-machines-writing-and-editing
Key Ideas
- AI tools like Cursor and Claude Code are transforming how researchers get feedback on their papers, supplementing the traditionally slow process of supervisor reviews, conferences, and referee reports.
- Providing AI with project-specific context (a CLAUDE.md file, writing guides, project descriptions) dramatically improves feedback quality.
- A structured workflow progresses from high-level referee-style reports, to section-by-section evaluation, to flagging unsupported claims and missing robustness checks, to consistency checks, to final polishing.
- AI can be asked to evaluate journal fit and suggest target journals or changes needed for a specific journal.
- The main limitation is that AI can make weak arguments sound coherent, potentially smoothing over conceptual problems rather than fixing them.
- AI suggestions should be treated as candidate edits to review, not edits to accept wholesale; hallucinated results and fabricated explanations are common.
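The project-context idea can be made concrete with a CLAUDE.md file. The sketch below is illustrative only: the project topic, section names, and instructions are invented for the example, not taken from Backman's own file.

```markdown
# CLAUDE.md (illustrative sketch)

## Project
Empirical finance paper; target audience is applied economists,
target outlets are general-interest finance journals.

## Writing style
- Active voice, short sentences, minimal hedging.
- American English spelling.

## Feedback instructions
- When reviewing, write like a journal referee: summary, then major
  comments, then minor comments, then a recommendation.
- Flag any claim not backed by a table, figure, or citation.
- Check that notation and variable names are consistent across the
  main text and appendices.
```

Claude Code reads a CLAUDE.md at the repository root automatically, so instructions like these apply to every feedback request without being restated in each prompt.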
Summary
Backman describes how he has integrated Cursor and Claude Code into his paper-writing workflow, primarily as feedback tools rather than idea generators or coders. The core insight is that by giving AI tools additional context -- such as a project guide, writing style preferences, or domain-specific instructions stored in a CLAUDE.md file -- researchers can get feedback that is more aligned with their goals and the norms of their field.
His concrete workflow starts with asking for a full referee-style report on a completed draft, then moves to section-by-section evaluation for inconsistencies, identification of unsupported claims and missing robustness checks, notation and reference consistency checks across the entire project, and finally professional editing for clarity and tone. He also uses AI to evaluate journal fit and to handle ancillary tasks like writing README files and figure alt-text.
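The staged workflow can be summarized as a sequence of prompts. These are paraphrased illustrations of each stage, not Backman's exact wording, and the section numbers and journal placeholder are hypothetical:

```text
1. "Act as a referee for [target journal]. Write a full referee report:
   summary, major concerns, minor comments, and a recommendation."
2. "Go through Section 4 paragraph by paragraph. Flag internal
   inconsistencies and places where the argument skips a step."
3. "List every empirical claim not supported by a table, figure, or
   citation, and suggest robustness checks a referee would request."
4. "Check notation, variable names, and cross-references for
   consistency across the paper and the appendix."
5. "Edit Section 2 for clarity and tone as a professional academic
   editor, returning suggested edits rather than a rewrite."
```

Running the stages in this order keeps the expensive, high-level feedback first and defers line-level polishing until the argument itself has stabilized.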
Backman is candid about limitations. He notes that AI-generated reports tend to be overly negative (even a published RFS paper received a "major revisions" recommendation) and that AI can make text sound polished even when the underlying argument is weak. He emphasizes the importance of exercising independent judgment and not accepting edits wholesale, drawing a parallel to the general skill of learning to ignore some feedback.
Relevance to Economics Research
This article provides a practical, step-by-step guide for economists and finance researchers who want to use AI to improve their papers before submission. The workflow is directly applicable to the typical economics research pipeline -- from first draft through journal targeting -- and addresses real pain points like the difficulty of getting timely, substantive feedback and maintaining consistency across long projects with multiple tables and appendices.