Introducing claude-doctor, a brand-new tool by Aiden Bai that analyzes your Claude Code sessions to detect behavioral anti-patterns and automatically generate rules for better AI collaboration.
Welcome to Code2Cast! I'm Chris, and today we're exploring a fascinating new project called claude-doctor. Jessica, this tool does something I've never seen before - it analyzes your AI coding sessions.
Right? It's like having a therapist for your AI interactions! Created by Aiden Bai from Million.dev, claude-doctor examines your Claude Code transcripts and spots when things go wrong.
And this is brand new - Aiden literally built this entire sophisticated system in just two days, April 14th and 15th. What exactly is it looking for?
It tracks over a dozen behavioral patterns. Edit thrashing - when you modify the same file five or more times in one session. Error loops - three consecutive tool failures. Even sentiment analysis using custom tokens like 'undo', 'revert', and... well, some colorful language when you're frustrated.
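[Editor's note: the two heuristics just described can be sketched in a few lines of TypeScript. The type and function names below are illustrative guesses, not claude-doctor's actual API; only the thresholds (5+ edits, 3 consecutive failures) come from the episode.]

```typescript
// Illustrative sketch of edit-thrashing and error-loop detection.
type SessionEvent =
  | { kind: "edit"; file: string }
  | { kind: "tool"; ok: boolean };

const THRASHING_THRESHOLD = 5;  // 5+ edits to one file in a session
const ERROR_LOOP_THRESHOLD = 3; // 3 consecutive tool failures

// Flag files edited at least THRASHING_THRESHOLD times.
function detectThrashing(events: SessionEvent[]): string[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.kind === "edit") {
      counts.set(e.file, (counts.get(e.file) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= THRASHING_THRESHOLD)
    .map(([file]) => file);
}

// Detect a run of consecutive tool failures.
function detectErrorLoop(events: SessionEvent[]): boolean {
  let streak = 0;
  for (const e of events) {
    if (e.kind === "tool") {
      streak = e.ok ? 0 : streak + 1;
      if (streak >= ERROR_LOOP_THRESHOLD) return true;
    }
  }
  return false;
}
```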
I love that it has a dedicated frustrated-session test fixture with lines like 'none of this shit works, undo everything'. The technical implementation is clever - it uses the sentiment.js library but adds Claude-specific tokens that regular sentiment analysis would miss.
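[Editor's note: claude-doctor uses the sentiment.js library; the snippet below is a dependency-free sketch of the underlying idea, scoring messages against custom Claude-specific tokens. The token weights are made up for illustration.]

```typescript
// Custom token weights a generic sentiment model would miss.
// Words and scores here are illustrative, not the tool's actual lexicon.
const CLAUDE_TOKENS: Record<string, number> = {
  undo: -2,
  revert: -2,
  wrong: -2,
  perfect: 2,
};

// Sum the weights of any known tokens in a message.
function scoreMessage(text: string): number {
  return text
    .toLowerCase()
    .split(/\W+/)
    .reduce((sum, word) => sum + (CLAUDE_TOKENS[word] ?? 0), 0);
}
```

A negative score on a user turn is one signal of a frustrated session.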
The constants file is fascinating. They've tuned thresholds for everything - thrashing starts at 5 edits, becomes critical at 20. Rapid corrections are flagged if you respond within 10 seconds of Claude's output. There's even detection for 'keep going' loops when you repeatedly tell Claude to continue.
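[Editor's note: a constants file like the one described might look something like this. The values for thrashing (5 warn, 20 critical) and rapid corrections (10 seconds) come from the episode; the property names and the keep-going value are guesses.]

```typescript
// Illustrative tuning constants; names are hypothetical.
export const THRESHOLDS = {
  thrashingWarn: 5,          // edits to the same file
  thrashingCritical: 20,
  rapidCorrectionMs: 10_000, // user replies within 10s of Claude's output
  keepGoingLoop: 3,          // assumed value for repeated "keep going" prompts
} as const;

// A correction is "rapid" if the user replied within the window.
function isRapidCorrection(claudeDoneAt: number, userReplyAt: number): boolean {
  return userReplyAt - claudeDoneAt <= THRESHOLDS.rapidCorrectionMs;
}
```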
What's really smart is the output. It doesn't just identify problems - it generates ready-to-paste rules for your CLAUDE.md file. Things like 'Read the full file before editing' or 'After 2 consecutive tool failures, stop and change your approach entirely.'
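[Editor's note: the mapping from detected patterns to CLAUDE.md rules could be as simple as a lookup table. The rule text echoes the examples quoted in the episode; the data structure and function are hypothetical.]

```typescript
// Hypothetical finding-to-rule mapping; rule text from the episode.
type Finding = "thrashing" | "error-loop";

const RULES: Record<Finding, string> = {
  "thrashing": "Read the full file before editing.",
  "error-loop":
    "After 2 consecutive tool failures, stop and change your approach entirely.",
};

// Emit one deduplicated markdown bullet per finding, ready for CLAUDE.md.
function generateRules(findings: Finding[]): string {
  return [...new Set(findings)].map((f) => `- ${RULES[f]}`).join("\n");
}
```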
The architecture is a solid TypeScript monorepo using pnpm workspaces and Turbo. It's built with commander.js for the CLI, includes a nice spinner UI, and it's already published to npm. You can run 'npx claude-doctor' right now.
Looking at the detection algorithms, they track structural issues like excessive exploration - when your read-to-edit ratio exceeds 10:1. And behavioral patterns like correction-heavy sessions where 20% of your messages start with 'no', 'wrong', or 'wait'.
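[Editor's note: the two structural checks just mentioned are easy to sketch. The 10:1 ratio and the 20% correction-opener threshold come from the episode; the function names and exact tokenization are illustrative.]

```typescript
// Words that mark a user message as a correction (per the episode).
const CORRECTION_OPENERS = ["no", "wrong", "wait"];

// Excessive exploration: more than 10 reads per edit.
function isExcessiveExploration(reads: number, edits: number): boolean {
  return edits > 0 && reads / edits > 10;
}

// Correction-heavy: at least 20% of messages open with a correction word.
function isCorrectionHeavy(messages: string[]): boolean {
  const corrections = messages.filter((m) =>
    CORRECTION_OPENERS.includes(m.trim().toLowerCase().split(/\W+/)[0] ?? "")
  ).length;
  return messages.length > 0 && corrections / messages.length >= 0.2;
}
```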
There's even interruption detection! It spots when you stop Claude mid-response and factors that into your session health score. The tool basically learns your pain points and teaches you how to collaborate better with AI.
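[Editor's note: the episode doesn't reveal the actual scoring formula, so the weights below are entirely made up. This only illustrates the shape of a health score that penalizes interruptions alongside the other signals.]

```typescript
// Hypothetical session health score; every weight here is an assumption.
interface SessionStats {
  interruptions: number;
  errorLoops: number;
  thrashedFiles: number;
}

// Start from 100 and subtract a penalty per observed problem.
function healthScore(s: SessionStats): number {
  const penalty = s.interruptions * 5 + s.errorLoops * 10 + s.thrashedFiles * 8;
  return Math.max(0, 100 - penalty);
}
```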
What strikes me is how Aiden identified a real problem in the AI coding workflow. We're all learning how to work with these tools effectively, and claude-doctor provides data-driven insights into what actually works.
Exactly! Instead of guessing why your Claude sessions feel frustrating, you get concrete metrics. It's like performance profiling but for human-AI interaction patterns. The fact that someone built this comprehensive analysis tool in two days shows how fast this space is moving.
That's claude-doctor - turning your AI interaction frustrations into actionable rules. Check it out at github.com/aidenybai/claude-doctor or just run npx claude-doctor to analyze your own sessions. Thanks for joining us!