Foundations To Outcomes
A page refresh that took ten weeks in the last iteration took under a week this quarter. Same team. Same headcount. Different way of working.
I lead a team of 7 engineers, a data engineer, and a DevOps specialist at The Collecting Group. In Q1 we shifted AI impact across our codebase from 56% to 84%, with a human-to-AI code multiplier rising from 1.6x to 7.5x. Commit volume was up 66%. We started new repositories AI-native from day one. No new hires.
The numbers are the easy part to point at. Most of the work that produced them was invisible.
What Actually Changed
The shift wasn’t about adopting tools. It was about changing how the team works.
AI became a core part of the workflow, not an add-on. Shared skills, ways of working, prompts, and guardrails meant improvements weren’t isolated. They compounded. As individuals learned, the system improved. As the system improved, the codebase did too. Better outputs, cleaner patterns, and stronger defaults started to reinforce each other.
Expectations evolved alongside this, reflected in the career framework and reinforced through knowledge sharing, coaching and jams. Work moved towards smaller bets, faster feedback loops, and clearer ownership.
These compounding loops, shared skills, and continuous improvements became a crucial part of how work flows. Combined with a product-led roadmap, they shape what actually gets shipped.
What The Numbers Are Telling Us
- Active repos up from 22 to 32
- Total commits up 66%
- Lines added up 150% (lines removed up 79%)
- AI impact up 28 percentage points to 84%
- AI share of new code up to 88.2%
- Human-to-AI multiplier up to 7.5x
Underneath that:
- Human lines added down 23%
- AI lines added up 250%
This is the shift from autocomplete to something closer to orchestration. The surface area of output has changed. The limiting factor has moved.
What Is Still Hard
Reviewing AI-assisted output at the volume it now arrives in is the next bottleneck.
We have more code. We have it faster. Confidence has to keep up, and quality cannot be allowed to slip.
Right now that means improving how we review, not just how we generate:
- Better patterns for chunking changes
- Stronger automated checks and guardrails
- Clear ownership on what “done” means
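To make "automated checks and guardrails" concrete, here is a minimal sketch of one such check: a pre-review gate that flags oversized diffs so changes arrive in reviewable chunks. The threshold, function names, and input shape are illustrative, not our actual pipeline; the input is the tab-separated output of `git diff --numstat`.

```python
def diff_size(numstat: str) -> int:
    """Sum added + removed lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.strip().splitlines():
        added, removed, _path = line.split("\t")
        # Binary files report "-" for both counts; skip them.
        if added.isdigit() and removed.isdigit():
            total += int(added) + int(removed)
    return total


def review_verdict(numstat: str, limit: int = 400) -> str:
    """Return "ok" for reviewable diffs, or a prompt to split the change."""
    size = diff_size(numstat)
    if size <= limit:
        return "ok"
    return f"split this change ({size} lines > {limit})"
```

Wired into CI (e.g. `git diff main --numstat` piped into a script like this), a check of this shape turns "fewer large diffs" from a norm into a default.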
There is also friction in the flow between idea and merged code. Some of it is process. Some of it is tooling. Some of it is habit.
We are also carrying too much work in progress. It spreads attention, slows feedback, and hides what is actually close to shipping. Tightening WIP and leaning into DRIs matters more at this stage. Clear ownership for everything being created, and a single accountable person seeing it through to production, is how we retain quality and ensure things actually go live.
“Stop starting. Start finishing” matters even more now that our capability to produce has increased. Thanks Vaz 😀
Q2: Show Me The Money!!!
Q1 was about laying foundations. Q2 is focusing on outcomes the business can feel.
Our focus shifts from ways of working to throughput and feature delivery:
- How we measure and improve throughput of meaningful features as a system
- What impact this has on revenue, conversion, and user experience
- Where the new bottlenecks sit now that generation is no longer the constraint
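One way to ground "throughput as a system" is a simple flow metric. Below is a hedged sketch: median cycle time from PR opened to merged. The data shape is hypothetical; in practice the timestamps would come from the Git host's API rather than hard-coded pairs.

```python
from datetime import datetime, timedelta
from statistics import median


def median_cycle_time(prs: list[tuple[datetime, datetime]]) -> timedelta:
    """Median elapsed time across (opened_at, merged_at) pairs of merged PRs."""
    return timedelta(seconds=median(
        (merged - opened).total_seconds() for opened, merged in prs
    ))


# Illustrative usage: two PRs taking 1 day and 3 days give a 2-day median.
sample = [
    (datetime(2025, 1, 1), datetime(2025, 1, 2)),
    (datetime(2025, 1, 1), datetime(2025, 1, 4)),
]
```

A median (rather than a mean) keeps one stuck PR from masking how fast the typical change actually flows, which is the question the quarter is asking.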
Two themes for the quarter:
1. Reviewing code at scale
Reducing friction in review without lowering quality. More automation where it helps. Better defaults. Fewer large diffs.
2. End-to-end flow
From idea to production. Tightening the loop so the gains in generation translate into shipped value.
A Note On Mindset
The harder shift isn’t the tools. It’s how we think about our work.
Moving from writing everything by hand to orchestrating output changes what it means to be an engineer. Judgement, taste, and accountability matter more, not less.
I’ve enjoyed reading Gene Kim and Steve Yegge’s book Vibe Coding, and their FAAFO acronym stuck with me. There’s a useful tension in there between speed and consequence. It’s a good lens for this phase.

Closing Thoughts
The numbers in our codebase moved fast. The more interesting change is in how we work.
Q1 proved the model. Q2 is about proving the impact.
I’m curious what other engineering leaders are seeing as they move from autocomplete to more agentic workflows. Where are your bottlenecks shifting to?