We dive into the incredible 21,620 commits that landed in OpenClaw over the past few months, featuring Active Memory, local MLX speech, Matrix testing, and the tireless work of Peter Steinberger, Vincent Koc, Tak Hoffman, and the growing OpenClaw community.
Welcome back to Code2Cast! I'm your host, and today we're covering what's been happening in OpenClaw - the personal AI assistant that runs on your own devices. And folks, when I say 'what's been happening,' I mean what has been HAPPENING. We're looking at the February through April 2026 period, and the numbers are absolutely wild.
How wild are we talking?
Try 21,620 commits since February first. That's not a typo - twenty-one thousand, six hundred and twenty commits in about two and a half months. For context, that's roughly 280 commits per day, every single day, including weekends.
That's... that's borderline impossible. Who's driving this? Is this a massive team?
That's what makes this story so fascinating. Just in the last 28 days alone, Peter Steinberger has pushed 4,851 commits. Vincent Koc added 1,890, and Tak Hoffman contributed 576. That's Peter averaging about 173 commits per day for a month straight.
Wait, Peter Steinberger... isn't that the PSPDFKit founder? The PDF framework guy?
Exactly! And now he's apparently decided to revolutionize personal AI assistants. Looking at his commit history, he's been on an absolute refactoring spree - simplifying conversions, optimizing imports, hardening security boundaries. The man is systematically cleaning up a codebase that spans 150 source directories.
Okay, but what are they actually building? What's OpenClaw?
It's a personal AI assistant that you run on your own devices - the gateway is self-hosted, and if you pair it with local models there's no cloud dependency at all. But here's the kicker: it answers you on every channel you already use. WhatsApp, Telegram, Slack, Discord, Signal, iMessage, Matrix, even IRC. It's like having Claude or ChatGPT, but it lives in your existing conversations and you control everything.
That's actually brilliant. No more app-switching. But with that commit velocity, what major features landed recently?
The big headline is Active Memory. Tak Hoffman led this one - it gives OpenClaw a dedicated memory sub-agent that automatically pulls in relevant context from your past conversations. No more having to remember to say 'remember this' or manually search your chat history.
Oh, that's clever. It's like having an AI that actually remembers you talked about your Python project last week without you having to remind it.
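For listeners who want to picture what a memory sub-agent does under the hood: the episode doesn't detail OpenClaw's retrieval mechanics, but the general pattern is to score past messages against the current query and inject the best matches into context. Here's a minimal sketch using word overlap as a stand-in for real embedding similarity - the store, the messages, and the scoring are all illustrative, not OpenClaw's code:

```python
# Illustrative only - the episode doesn't document Active Memory's internals,
# so word overlap stands in here for real embedding similarity.

def tokenize(text: str) -> set[str]:
    return {word.lower().strip(".,?!'") for word in text.split()}

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap; a real memory agent would compare embedding vectors.
    return len(a & b) / max(1, len(a | b))

class MemoryStore:
    """Toy conversation memory: keep past messages, recall the most relevant."""

    def __init__(self) -> None:
        self.messages: list[str] = []

    def remember(self, text: str) -> None:
        self.messages.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = tokenize(query)
        ranked = sorted(self.messages,
                        key=lambda m: similarity(tokenize(m), q),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.remember("Started rewriting the parser for my Python project, due Friday")
store.remember("Dentist appointment moved to Tuesday")
print(store.recall("how's my Python project going?", k=1))
# -> ['Started rewriting the parser for my Python project, due Friday']
```

In a production system the similarity function would be an embedding model and the store a vector index, but the recall-then-inject flow is the same idea.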
Exactly! And here's something wild for macOS users - they added local MLX speech support. Thanks to Luke F's work, you can now have voice conversations with your AI assistant using Apple's MLX framework, completely locally. No API calls for speech processing.
Wait, MLX? That's Apple's machine learning framework that runs on their silicon, right? So this is taking advantage of the Neural Engine?
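The episode doesn't show the exact pipeline Luke F wired up, but for a taste of on-device speech on Apple silicon, the community mlx-whisper package (pip install mlx-whisper) gives local transcription in a few lines. The model repo below is just one published conversion; treat the whole snippet as generic MLX usage rather than OpenClaw's integration:

```python
# Generic mlx-whisper usage - runs locally on Apple silicon via MLX;
# no speech API calls leave the machine. Not OpenClaw's actual code.
import mlx_whisper

result = mlx_whisper.transcribe(
    "voice_note.wav",  # any local audio file
    path_or_hf_repo="mlx-community/whisper-large-v3-mlx",  # fetched once, then cached locally
)
print(result["text"])
```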
Close - MLX actually targets Apple silicon's GPU and unified memory through Metal rather than the Neural Engine, but the upshot is the same: everything runs on-device. And speaking of taking advantage of hardware, Gustavo Madeira Santana has been building out Matrix support with live QA testing. They're literally spinning up disposable Matrix homeservers to test the integration. The attention to quality is insane.
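To make "live QA against disposable homeservers" concrete: a throwaway Synapse container plus a scripted client can smoke-test an integration end to end. A rough illustration with the matrix-nio library - the localhost server, user, and room IDs are assumptions, not OpenClaw's actual harness:

```python
# Smoke test against a throwaway Matrix homeserver (e.g. a Synapse container
# started just for this run). Server URL, user, and room IDs are assumptions.
import asyncio
from nio import AsyncClient, RoomSendResponse

async def smoke_test() -> None:
    client = AsyncClient("http://localhost:8008", "@qa-bot:localhost")
    await client.login("qa-password")  # throwaway credentials on the test server
    response = await client.room_send(
        room_id="!testroom:localhost",
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "integration ping"},
    )
    assert isinstance(response, RoomSendResponse), f"send failed: {response}"
    await client.close()

asyncio.run(smoke_test())
```

Because the homeserver is disposable, the test can create users and rooms freely and the whole environment gets torn down afterward.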
That's serious infrastructure work. What about the security side? With that many commits, I'd be worried about bugs creeping in.
Actually, security seems to be a major focus. The changelog shows hardened browser sandbox defenses, SSRF protection, exec preflight security, host environment denylisting. They're not just moving fast - they're moving fast and securing everything as they go.
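For a sense of what SSRF protection means in practice: before fetching any user-supplied URL, resolve the hostname and refuse addresses in private, loopback, or link-local ranges. A generic guard, not OpenClaw's implementation:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Basic SSRF guard: refuse URLs that resolve to internal address ranges."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable -> refuse
    for info in infos:
        # info[4][0] is the address string; drop any IPv6 zone id like %en0
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True

print(is_safe_url("https://example.com/feed"))  # True: resolves to a public address
print(is_safe_url("http://169.254.169.254/"))   # False: cloud metadata endpoint
```

One subtlety worth noting: a safe-looking URL can redirect to an internal one, so real implementations re-validate every hop.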
And what about this Codex thing I keep seeing in the commits?
That's their new bundled provider system. Instead of just using raw OpenAI APIs, they've built their own Codex provider with managed auth, native threads, and model discovery. Vincent Koc has been leading the OAuth integration work to make it seamless.
So they're essentially building their own AI platform on top of the existing providers?
Right, and here's what's really impressive - they're maintaining compatibility with a whole roster of providers: OpenAI, Anthropic's Claude, Ollama for local models, even custom endpoints. It's provider-agnostic but with first-class integrations.
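"Provider-agnostic with first-class integrations" typically means one thin common interface with a dedicated adapter per backend. A purely illustrative sketch - OpenClaw's actual abstraction will differ - using Ollama's local HTTP API as the example backend:

```python
import json
import urllib.request
from typing import Protocol

class ChatProvider(Protocol):
    """The thin surface every backend adapter implements."""
    def complete(self, prompt: str) -> str: ...

class OllamaProvider:
    """Adapter for local models served by Ollama (default port 11434)."""

    def __init__(self, model: str = "llama3") -> None:
        self.model = model

    def complete(self, prompt: str) -> str:
        payload = json.dumps({"model": self.model, "prompt": prompt, "stream": False})
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

def ask(provider: ChatProvider, question: str) -> str:
    # Call sites depend only on the protocol, never on a concrete backend.
    return provider.complete(question)

# Usage (requires a running Ollama daemon with the model pulled):
# print(ask(OllamaProvider("llama3"), "Summarize my unread Slack messages"))
```

First-class integrations like the Codex provider then add provider-specific extras - managed auth, native threads, model discovery - behind that same common surface.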
The scope of this is incredible. But with 280 commits per day, how do they maintain quality? That seems unsustainable.
Look at the commit messages though - most of Peter's work is methodical refactoring. 'Simplify conversions,' 'optimize imports,' 'stabilize test fixtures.' It's not chaotic feature additions - it's systematic codebase improvement at an industrial scale.
That actually makes more sense. So it's the kind of cleanup you'd normally leave to automated tooling, except he's driving it himself at that volume?
Exactly. And the results speak for themselves - they're shipping major features like WhatsApp media fixes, Microsoft Teams integration improvements, Discord voice message support, all while this refactoring is happening. The stability work is enabling the feature work.
What's next for them? Can they sustain this pace?
Looking at the unreleased changelog, they have QA automation with Linux VMs, exec policy management, live Canvas controls, and dreaming mode - which appears to be some kind of background intelligence system. The innovation isn't slowing down.
This feels like one of those projects that's going to fundamentally change how we think about AI assistants. Personal, private, but with professional-grade capabilities.
And it's all MIT licensed, available on GitHub at openclaw/openclaw. If you want to run your own AI assistant that works across all your messaging platforms, this might be the future. Thanks Peter, Vincent, Tak, Gustavo, and the entire OpenClaw community for showing us what's possible when you combine vision with relentless execution.
Definitely keeping an eye on this one. That's a wrap for today's episode - we'll be back next month to see what the next 6,000 commits bring us!