A fireside follow-up, and the bits I wanted to underline
I recently did a fireside chat follow-up to my AOTB 2025 AMA talk. It was meant to be simple. A few questions, a few stories, a bit of reality from the last 6 to 12 months.
What I didn’t expect was how clearly a few themes came through once I heard myself say them out loud.
Three, in particular:
- Compounding is the whole game
- “Back yourself” is not motivational fluff. It is a strategy
- Curiosity and experimentation are the mindset I’d encourage anyone to adopt in response to the pace we are living through
Compounding is not a buzzword. It is a workflow
The most useful shift I’ve found with AI is treating it like a system that can improve, not just a tool that can output code.
If you use AI like a slot machine, you get random results. Sometimes brilliant. Sometimes terrifying. Often inconsistent. It might feel fast, but it does not get better.
If you use AI like a feedback loop, you start compounding.
What does that look like in practice?
- Build tests early so you can see what broke and what didn’t
- When the AI goes off-script, don’t just fix the output. Fix the system
- Turn mistakes into rules. Turn preferences into conventions
- Put that learning back into the codebase, so the team benefits next time
The point is simple. You only want to hit the same problem once.
That is the compounding loop.
Every correction becomes future leverage. Every “we don’t do it like that here” becomes a reusable constraint. Every time you define what good looks like, you reduce the chance of drifting into chaos.
Practically, this is where tools like Cursor and Claude Code shine. They allow you to capture how your team works and turn that into rules the AI follows. If Cursor generates something slightly off, you do not just fix the code and move on. You update the rule set. Maybe it used the wrong pattern, ignored a test convention, or chose Python when your stack is JavaScript. That becomes a rule so the next task starts closer to the right answer.
Over time you build a library of how your team actually works. Coding standards. Architectural preferences. API integration patterns. Even things like how you structure tests or handle error states. The AI starts every task with that context instead of starting from scratch.
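As a concrete illustration, a project rule in Cursor is just a small file the AI reads before every task. The filename, globs, and rule text below are hypothetical; adapt them to your own stack:

```markdown
---
description: Team conventions for API handlers
globs: ["src/api/**/*.ts"]
alwaysApply: false
---

- All code in this repo is TypeScript; never generate Python.
- New endpoints ship with a test alongside them, using our existing test helpers.
- Use the shared error-handling wrapper rather than ad-hoc try/catch blocks.
```

Each line here started life as a correction someone made once. Written down, it becomes the starting context for every future task instead of a review comment you repeat forever.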
The most powerful moment is right at the end of a piece of work. Once the feature ships or the PR is finished, take a minute to ask what the AI got wrong, what you corrected, and what you would want it to do differently next time. Then update the rules.
That small habit compounds. The next feature is easier. The next engineer benefits from the same context. And the system gradually gets better at producing the kind of software your team actually wants to ship.
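The "update the rules" step does not need tooling at all, but if you want to make the habit frictionless, it can be as small as a helper that appends each lesson to a shared conventions file. A minimal sketch, assuming a plain-markdown file (`team-conventions.md` and the lesson text are hypothetical names, not part of any tool):

```python
from pathlib import Path

def record_lesson(rules_file: Path, lesson: str) -> None:
    """Append a correction to the shared rules file so the next task
    starts with it. Creates the file on first use, skips duplicates."""
    existing = rules_file.read_text() if rules_file.exists() else ""
    if lesson in existing:
        return  # already captured once; compounding, not repetition
    with rules_file.open("a") as f:
        f.write(f"- {lesson}\n")

rules = Path("team-conventions.md")
record_lesson(rules, "Prefer the shared error-handling wrapper over ad-hoc try/except")
record_lesson(rules, "All new endpoints need a test before review")
print(rules.read_text())
```

The duplicate check matters: the goal is a short list the AI actually reads, not a log that grows forever.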
This is also why “taste” matters more than ever. AI can generate. It can even generate well. But your judgement is what decides if it is correct, maintainable, safe, and aligned with how your team builds software.
Defining “good” is now a leadership responsibility
There’s a moment in this AI wave where the conversation shifted.
We used to talk about hallucinations. Now we talk about bugs.
That change is subtle but important. Bugs are closer to the truth. They look plausible. They pass a casual glance. They sometimes pass a shallow review.
Which means the job now is not just “did the AI write code”. The job is “did we define what good looks like, and did we build a process that reliably produces it”.
For me, that has meant zooming out from output and into bottlenecks:
- If we can produce code faster, how do we review it fast enough
- If we can ship more, how do we keep production safe
- If we can change more frequently, how does testing scale with that cadence
- If we speed up one part of the system, what breaks downstream
This is why I keep coming back to guardrails, not hype.
I’m not interested in turning a team into PR robots. I am interested in building a system that lets people move faster without burning out, and without gambling the business on hope.
What I meant by “back yourself”
When I said “back yourself”, I wasn’t trying to be inspirational. I was trying to describe a practical stance in a very noisy market.
Right now, you can find someone confidently saying:
- AI will replace engineering
- AI is a fad
- This tool is the only tool
- Everyone else is doing it wrong
- You’re already behind
Most of it is either exaggerated, premature, or context-free.
So “back yourself” means:
- Don’t outsource your thinking to headlines
- Don’t bet your career on certainty that nobody actually has
- Keep an alternative strategy
- Build enough hands-on experience that you can form your own view
If you are feeling anxious about any of this, the best antidote I know is to play with the tools.
Not to become a prompt wizard. Not to keep up with every release. Just to get skin in the game.
Pick one. Use it for real work. Learn what it is good at and what it lies about. Then you can pivot.
That is what “back yourself” looks like. It is confidence built on evidence, not vibes.
Curiosity beats prediction. Experimentation beats debate
I keep finding the same thing. I cannot predict this stuff. Even when I think I can.
You can be right in July and outdated by August.
That is not failure. That is just the environment we find ourselves in.
So the only approach that makes sense is:
- Be curious
- Run small experiments
- Keep what works
- Bake it into your system
- Share it with your team
- Repeat
This is also why I like the idea of keeping an ongoing conversation, not just a once-a-year conference slot. The gap between “submitted talk” and “delivered talk” is now large enough for the world to change twice.
The only way to stay grounded is to keep learning in public, and keep comparing notes.
Closing thought
At the start I mentioned three themes that kept coming up in the fireside chat:
Compounding is what turns AI from an interesting tool into real leverage.
Back yourself by building your own evidence.
Then stay curious and keep experimenting, because the ground is moving whether we like it or not.
If you’re using AI in your team, I’d love to hear what your compounding loop looks like. What rules are you baking in? Where are you seeing leverage, and where are you seeing risk?