Hulk smash verbose AI! Julius Brussee build caveman plugin. Make AI agents speak like smart caveman. Cut 75% tokens. Brain still big.
Welcome code2cast. Today explore caveman project by Julius Brussee. Jessica, what this thing do?
Chris, this brilliant! Project make AI agents talk like caveman. Drop fluff. Keep technical substance. Save 75% tokens but accuracy stay 100%.
Wait. We use caveman-speak to describe caveman-speak project? Meta-irony perfect!
Exactly! Instead 'I would be happy to explain React component re-rendering', caveman say 'New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo.'
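Caveman answer pack same logic. Plain-JS sketch show the ref-identity point — no React here, and `memoizeOnce` is hand-rolled illustration, NOT React's real `useMemo`:

```javascript
// Why inline object prop cause re-render: new reference each call.
function makeInlineProp() {
  return { color: 'red' }; // fresh object every "render"
}

// Hand-rolled once-cache in spirit of useMemo (NOT React's actual hook):
function memoizeOnce(factory) {
  let cached;
  let filled = false;
  return () => {
    if (!filled) { cached = factory(); filled = true; }
    return cached;
  };
}

const inline = [makeInlineProp(), makeInlineProp()];
console.log(inline[0] === inline[1]); // false: new ref each call, child re-renders

const stable = memoizeOnce(() => ({ color: 'red' }));
console.log(stable() === stable()); // true: stable ref, child skips re-render
```

Same rule drive React: stable reference mean child see same prop, skip re-render. Nineteen caveman tokens carry whole idea.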
Code work across Claude Code, Cursor, Windsurf, Cline, 40+ agents. Julius build hooks system in JavaScript. Auto-activate every session.
Architecture smart! SessionStart hook write flag file ~/.claude/.caveman-active. Statusline show [CAVEMAN:ULTRA] badge. Mode tracker watch for /caveman commands.
Best part - intensity levels! Lite keep grammar. Full drop articles. Ultra abbreviate everything. Even got wenyan mode - classical Chinese compression!
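Levels easy to picture with toy compressor. Illustration only — not plugin's real rule set, abbreviation table invented for demo:

```javascript
// Toy three-level compressor (lite / full / ultra). Real plugin rules differ.
const ARTICLES = new Set(['a', 'an', 'the']);
const ABBREV = { function: 'fn', variable: 'var', because: 'bc' }; // made-up table

function cavemanize(text, level = 'lite') {
  let words = text.trim().split(/\s+/);
  if (level === 'full' || level === 'ultra') {
    words = words.filter(w => !ARTICLES.has(w.toLowerCase())); // full: drop articles
  }
  if (level === 'ultra') {
    words = words.map(w => ABBREV[w.toLowerCase()] || w); // ultra: abbreviate too
  }
  return words.join(' '); // lite: grammar untouched, just pass through
}

console.log(cavemanize('the function returns a value', 'full'));  // "function returns value"
console.log(cavemanize('the function returns a value', 'ultra')); // "fn returns value"
```

Each level strict superset of one below — ultra do everything full do, plus abbreviation.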
Evals prove it work. Three-arm test: baseline, terse, caveman. Honest delta show caveman beat 'Answer concisely' control. No cheating with fake compression.
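Honest-delta idea in code. Toy sketch — whitespace count stand-in for real tokenizer, function names invented:

```javascript
// Crude whitespace tokenizer (assumption; real evals use model tokenizer).
function tokenCount(text) {
  return text.trim().split(/\s+/).length;
}

// Three-arm comparison: caveman must beat the terse "Answer concisely"
// control, not just the verbose baseline, or the saving is fake.
function honestDelta(baselineText, terseText, cavemanText) {
  const [b, t, c] = [baselineText, terseText, cavemanText].map(tokenCount);
  return {
    vsBaseline: (b - c) / b, // flattering number
    vsTerse: (t - c) / t,    // honest number
  };
}
```

Control arm matter: if terse prompt already get most of saving, caveman add little. Reporting `vsTerse` is the no-cheating part.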
Code clean. Skills/, rules/, hooks/ folders. CI sync workflow auto-deploy. Community contributions from Jayesh Patel, Jimmy CrakCrn. Open source done right.
Token savings massive. React re-render explanation: normal 69 tokens, caveman 19 tokens. Same fix, ~72% less word (headline round up to 75%). Brain still big!
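Quick math check on numbers above:

```javascript
// Savings for the React example: 69 tokens down to 19.
const saved = (69 - 19) / 69;
console.log(`${Math.round(saved * 100)}% fewer tokens`); // prints "72% fewer tokens"
```

Exact figure 72%; the 75% headline is round number across all evals.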
Irony beautiful - we just explained token compression project using compressed speech. Meta-caveman achieved!
Why use many token when few do trick? Julius prove point. Caveman mode make AI faster, cheaper, more readable. One install command. That it.