The Guy Who Built Claude Code Changed How I Work
I’ve been skeptical of AI tooling for the same reason I’ve been skeptical of every hype cycle in my 25-year career — most of them overpromise and underdeliver. But a few months ago, something cracked that skepticism in a way I didn’t expect. Not a product launch. Not a demo. A single thread from the person who actually built the tool.
The Skeptic’s Resume
I started as a .NET developer. Over the last decade I’ve shifted into leadership roles — architecture, design governance, system modernization roadmaps. But recently, through side projects and community work, I’ve found myself writing more code again. That’s where AI entered the picture, and where my skepticism got tested.
Here’s the thing: I’ve watched technologies come and go. OS-level virtualization. Containers and Kubernetes. Modern DevOps. CI/CD pipelines. Configuration management. Every single one was going to change the world and leave behind anyone who wasn’t using it. And yet — bare metal servers are still out there. Windows IIS boxes are still being manually configured. Manual deployments still happen every day.
Not all technologies fit all scenarios. That history is exactly why I don’t take “10x productivity” claims at face value. AI is no different. Every sales engineer right now is leading with AI as the tagline. The tools work — but someone with experience still needs to be reviewing what they produce. An AI agent will always give you an answer. But is it the right answer for the landscape it’s actually being built in? Is it done the way the enterprise expects it to be done?
I wasn’t buying the hype.
The Thread That Broke the Model
Then one morning in January, a colleague sent me a VentureBeat article about how Boris Cherny — the creator of Claude Code — actually uses his own tool.
Even clicking the link, I was skeptical. Another influencer showing off a polished demo, I figured.
It wasn’t that. What Cherny described was something I recognized immediately — not from coding, but from my years leading teams. He was running five parallel terminals, each with a separate agent, working on different tasks simultaneously. He wasn’t typing code. He was directing resources. Reviewing output. Steering priorities. It looked less like programming and more like a lead developer running a sprint — assigning stories, checking work, course-correcting when something drifted off track.
That reframing hit differently than anything else I’d seen in the AI space. This wasn’t “AI writes your code for you.” This was “AI changes what your job actually is.” The mental model shifted from write, test, ship, repeat to delegate, monitor, steer, verify. From serial execution to parallel orchestration.
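The delegate-monitor-verify loop is easy to approximate mechanically. Here's a minimal bash sketch of the shape of it — with placeholder `sleep`/`echo` commands standing in for agent sessions, since the real workflow runs each agent interactively in its own terminal (the task names and log files here are purely illustrative):

```shell
#!/usr/bin/env bash
# Sketch: delegate several tasks in parallel, then review the results serially.
# The subshells below are placeholders for real agent sessions.

tasks=("refactor auth module" "write migration script" "update API docs")

pids=()
for i in "${!tasks[@]}"; do
  # Delegate: each task runs as its own background job (its own "terminal").
  ( sleep 1; echo "agent $i done: ${tasks[$i]}" > "result_$i.log" ) &
  pids+=($!)
done

# Monitor: wait for every agent to finish.
for pid in "${pids[@]}"; do
  wait "$pid"
done

# Verify: review each agent's output before accepting it.
for i in "${!tasks[@]}"; do
  cat "result_$i.log"
done
```

The point of the sketch is the structure, not the commands: the human's job moves to the top and bottom of the script — deciding what goes in `tasks` and reviewing what comes out — while the middle runs in parallel.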
And the fact that it came from the person who built the tool — not someone selling it, not an influencer chasing engagement — made it land in a way the hype never could.
The Experiment
It was compelling enough that I wanted to test it myself. But not with a throwaway proof of concept that would never see production. I wanted to see if Claude Code could build something I’d actually put my name on in a professional context.
That meant enterprise standards. Security features. Scalability concerns. The things that larger organizations take for granted but that take real discipline to implement well. I set ground rules: all changes through automation, deployments through CI/CD, environment resources managed through configuration management. Even the smaller details — service principals instead of secrets, proper identity management — because if I was going to evaluate this, I wanted it to be a real test.
I’m the type of person who needs firsthand experience before forming an opinion. I wasn’t going to write about AI-assisted development without actually doing it.
What I Found (and What’s Coming)
Fast forward to today. I’ve been using Claude Code for the last few months, and the results have genuinely surprised me.
Surprised me enough that I wanted to start writing about it. I come at this as a veteran developer and a dad who’s been building systems for 25 years. I can see where the technology is useful. I can also see where it will be exploited — and where over-reliance on it could stunt the growth of the next generation of engineers. The skills needed to review AI-generated code are the same skills that take years of hands-on experience to develop. If junior developers skip that phase, who’s doing the code reviews in five years?
These are the kinds of questions I want to dig into over the next few posts. Concrete examples. Real workflows. Honest assessments of where this tool delivers and where it falls short.
The Bottom Line
This tool will keep evolving. But I think it’s already past the point of being a novelty. It’s a real instrument in the toolkit — not a replacement for experience, but a very effective tool for people who have it.
I went from skeptic to cautiously convinced. Not because someone sold me on it, but because I ran the experiment myself. In the posts ahead, I’ll show you exactly what that experiment looked like — the wins, the failures, and the lessons I didn’t expect.
If you’ve had your own moment where AI tooling went from hype to genuinely useful, I’d love to hear about it.