The Engineers Who Build With AI Will Eat the Ones Who Don't

Published on February 28, 2026

AI tooling has changed the economics of software delivery, but the conversation around it is full of noise and short on honesty. Controlled studies show experienced engineers actually get slower using AI on familiar work, yet teams embracing it are shipping significantly more. Both things are true, and understanding why matters. This blog shares what I'm seeing from daily use of Claude Code and the Anthropic APIs, including building an AI tool that replicates the output of a pizza-sized team. It covers the real cost dynamics, why TDD just lost its last excuse, what engineers across every discipline will actually do when AI handles the boilerplate, and why teams still focused on trophies instead of delivery are about to have an uncomfortable few years. The engineers who lean into this will thrive. The ones waiting for it to blow over are running out of time.

The AI conversation in tech right now is mostly noise. Vendors dressing up autocomplete as revolution. LinkedIn thought leaders who haven't shipped code since 2019 posting about "the future of engineering." And then a quieter group of people who are actually using this stuff every day and seeing what it can and can't do.

I'm in that last group. I use Claude Code and the Anthropic APIs daily, on real production work. I'm building an AI builder tool that mimics the output of a pizza-sized team. I'm not theorising. I'm shipping it.

The Numbers Need Honesty

84% of developers now use or plan to use AI tools. Roughly 27% of production code is AI-authored. Self-reported productivity gains sit at 25-39%. Teams with high AI adoption are completing 21% more tasks and merging nearly double the pull requests.

Sounds great. Here's the problem.

METR ran a controlled study this year. 16 experienced open-source developers, working on their own repositories, randomly assigned tasks with and without AI tools. The developers using AI were 19% slower. Not faster. Slower. And afterwards, they estimated they'd been 20% faster. Completely wrong about their own performance.

That doesn't mean AI is useless. It means something more specific: AI doesn't speed up experienced engineers doing work they already know inside out. Where it makes a difference is on unfamiliar territory, boilerplate, and letting small teams punch above their weight. The real shift isn't "I wrote this function quicker." It's "I built this feature in a day that would've taken a sprint."

What It Actually Looks Like

Scaffolding a new service used to eat a full day. Project structure, config, CI pipelines, base tests, deployment manifests. With Claude Code I get to 80% in under two hours. The remaining 20% is where the interesting work lives anyway. AI gets you to the starting line faster. It doesn't run the race for you.

TDD is the big one for me. Most teams don't do test-driven development properly, and it's never been because they disagree with it. It's because writing tests first feels slow when there's a Product Manager breathing down your neck. AI changes that equation completely. I describe the behaviour I want, get a test suite generated in minutes, then write the implementation to make those tests pass. The red-green-refactor cycle stops feeling like a tax on delivery. The excuse of "we don't have time for tests" is dead. AI killed it.
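A minimal sketch of that cycle, purely illustrative: the function name and business rules here are hypothetical, standing in for behaviour you would describe to the AI before any implementation exists. The tests come first; the implementation is then written to turn red into green.

```python
# Illustrative only: "basket_total" and its discount rules are hypothetical.
# In the workflow described above, the test block at the bottom is what gets
# generated first from a plain-English description of the behaviour; the
# function is then written to make those tests pass.

def basket_total(prices, discount_threshold=100.0, discount_rate=0.10):
    """Sum prices, applying a 10% discount once the total exceeds the threshold."""
    total = sum(prices)
    if total > discount_threshold:
        total *= 1 - discount_rate
    return round(total, 2)

# Behaviour-first tests (red -> green -> refactor):
assert basket_total([]) == 0.0
assert basket_total([40.0, 50.0]) == 90.0   # under threshold: no discount
assert basket_total([60.0, 60.0]) == 108.0  # over threshold: 10% off 120
assert basket_total([100.0]) == 100.0       # boundary: not strictly over
```

The point isn't the toy function. It's that the asserts existed, in minutes, before a line of implementation did.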

Documentation, the thing everyone agreed mattered and nobody did, is now basically free to draft. ADRs, API specs, internal docs. You still need a human to check accuracy, but the grunt work is gone.

Debugging is 30-50% faster for routine investigation. Complex architectural decisions? No help at all. Zero. And that matters more than everything else combined.

The Cost Conversation

A mid-level engineer in the UK costs £80-100k fully loaded. A team of eight puts you at £640-800k a year before management overhead, tooling, or recruitment fees. Claude Code costs a few hundred quid a month. API calls for heavy production usage, a couple of thousand. The cost per feature delivered has fallen off a cliff.
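As a back-of-the-envelope sketch using the rough figures above (every input here is illustrative, not a measured benchmark):

```python
# Back-of-the-envelope cost-per-feature comparison.
# All inputs are rough, illustrative figures; nothing here is measured data.

engineers = 8
cost_per_engineer = 90_000               # £, mid-point of the £80-100k range
team_cost = engineers * cost_per_engineer  # £720,000 a year

ai_tooling = 8 * 300 * 12                # £, a few hundred quid a month per seat
api_usage = 2_000 * 12                   # £, heavy production API usage

features_before = 200                    # hypothetical annual output
uplift = 0.21                            # the 21% more-tasks figure cited earlier
features_after = int(features_before * (1 + uplift))

cost_per_feature_before = team_cost / features_before
cost_per_feature_after = (team_cost + ai_tooling + api_usage) / features_after

print(round(cost_per_feature_before))    # 3600: £3,600 per feature
print(round(cost_per_feature_after))     # 3193: lower, despite the tooling spend
```

The tooling spend barely registers next to salaries; the denominator is what moves.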

I'm not saying you replace a team with an API bill. But teams using AI well deliver significantly more per engineer. Output per head goes up, cost per feature goes down, and the business gets more from the same investment in people.

The value of a good engineer goes up in this world, not down. The leverage from tooling means every decision, every architectural choice, every piece of judgement has a bigger blast radius. That's the cost conversation boards should be having.

Why I'm Building an AI Builder

I'm building a tool that replicates what a pizza-sized team produces. It's a force multiplier for the engineers I already have. I've spent my career arguing for flat structures, lean delivery, hiring only when it genuinely hurts. This is the logical next step.

At Aberdeen, where I'm building technology from scratch, this means making every engineer more effective and giving them better work to do. When AI eats the boilerplate, engineers spend time on what actually matters: domain modelling, understanding the investment management problem space. Architecture up front rather than rushing to code. Pair-reviewing AI output with scrutiny that would've been a luxury before. Talking to stakeholders, because when you're not writing CRUD endpoints for eight hours you've got time to understand what the business actually needs. And thinking about security, data integrity, and regulatory compliance in a way that matters hugely in financial services and that AI cannot be trusted to handle alone.

The shift is from "engineer as delivery machine" to "engineer as technical decision-maker with an incredibly fast delivery machine at their disposal." That's a better job. More interesting, more impactful, harder to automate away.

The tool plugs into the same Anthropic APIs I use through Claude Code. Multiple agents handle scaffolding, test generation, documentation, review. It's systematic application of AI to the 60-70% of engineering work that's repetitive and predictable. The stuff that has to happen but doesn't need a human doing it manually.
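A minimal sketch of that shape: sequential agents, each with one job, each consuming the previous stage's output. The agent names and the `call_model` stub are hypothetical; in the real tool each stage would be an Anthropic API call, stubbed out here so the pipeline shape is runnable on its own.

```python
# Sketch of a multi-agent pipeline: scaffold -> tests -> docs -> review.
# `call_model` stands in for a real LLM call (e.g. via the Anthropic SDK);
# here it just echoes its inputs so the control flow is visible and testable.

from dataclasses import dataclass

def call_model(role: str, prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned response per role."""
    return f"[{role}] output for: {prompt[:40]}"

@dataclass
class Agent:
    role: str
    instruction: str

    def run(self, context: str) -> str:
        return call_model(self.role, f"{self.instruction}\n\n{context}")

PIPELINE = [
    Agent("scaffolder", "Generate project structure and CI config for:"),
    Agent("test-writer", "Write a failing test suite for:"),
    Agent("doc-writer", "Draft ADRs and API docs for:"),
    Agent("reviewer", "Review the output above for correctness and risk:"),
]

def build(feature_request: str) -> list[str]:
    """Run each agent in turn, feeding forward the accumulated context."""
    context, outputs = feature_request, []
    for agent in PIPELINE:
        result = agent.run(context)
        outputs.append(result)
        context = result  # each stage sees the previous stage's output
    return outputs

stages = build("Add a rate-limited webhook endpoint")
assert len(stages) == 4 and stages[-1].startswith("[reviewer]")
```

The design choice worth noting: each agent is narrow and replaceable. The review stage exists precisely because the earlier stages can't be trusted unsupervised.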

What AI Will Never Replace

Judgement under uncertainty. Engineering is about making decisions with incomplete information under real constraints. AI generates options. It can't weigh them against your CTO's pet project, your best engineer's parental leave, or the business needing revenue this quarter. The best technical answer is often not the right answer. That's human.

Taste. Knowing what not to build matters more than building quickly. I wrote about the overengineering trap before, and AI makes it worse. You can now scaffold an entire microservices architecture for a problem that needed a database table and a cron job.

Context that doesn't live in code. Why was this system built this way? What can this team realistically maintain? AI can read your codebase. It has no idea what was happening in the business when someone made that weird architectural choice you're stuck with.

Accountability. When something breaks at 2am, a person owns it. Always will.

Problem reframing. The best engineers I've worked with at On the Beach, at Cinch, spent most of their time questioning whether the problem as stated was even the right problem. AI doesn't do that.

What Engineers Will Actually Do

All engineers. Not just the ones writing application code. Software, QA, cloud and platform, DevOps, SREs. If your role involves building or maintaining technology, this changes what you do.

QA feels this hardest. AI generates test cases, writes automation scripts, churns through regression suites. A lot of manual QA work becomes redundant fast. But quality engineering as a discipline gets more important, not less. Someone still needs to think about risk profiles, edge cases, what to test and why. The QA engineer who thinks critically about quality strategy has a great future. The one whose main skill is writing Selenium scripts is in trouble.

Cloud and platform engineers, same pattern. AI scaffolds Terraform modules and Kubernetes configs. But understanding why you'd choose one architecture over another, designing for failure, calculating cost implications over twelve months? Human work.

For juniors, this needs a proper conversation. They see the biggest speed gains from AI but need the most supervision. AI helps them produce more. It doesn't teach them why things work. There's a real risk of juniors learning to produce code without understanding it. The answer isn't banning AI. It's being deliberate about how they learn. Code reviews as teaching tools. Pairing sessions where a senior explains why the AI got something subtly wrong. The juniors who progress fastest will be the curious ones who look under the hood rather than accepting answers and moving on.

For seniors and leads, this should be exciting. Prototype faster, validate without waiting on sprint capacity, focus your team on hard problems instead of plumbing. Everything I've said about flat structures and lean teams becomes more achievable.

AI First

There's a pattern across the industry. Experienced developers who've decided AI is a fad they can wait out. They survived blockchain, the metaverse, "low code will replace developers." So they're carrying on as before.

It's not going to pass. The tools available right now fundamentally change how we deliver software. The engineers getting curious about this are building the most valuable skill set in the industry. It's a career accelerant for anyone willing to engage.

At Aberdeen, AI first is how we operate. Every piece of work starts with "how does AI help us do this better, faster, or cheaper?" If the answer is it doesn't, fine, we do it the old-fashioned way. But the default assumption is that AI is involved.

The best thing any engineering leader can do right now is invest in their team's AI literacy. Give people time to experiment. Run sessions where people share what's working. Create space where it's safe to say "I tried this with AI and it was rubbish" because that's how you learn. The engineers who lean in will surprise you.

The Teams That Don't Adapt

If you're running an engineering team in 2026 and you haven't properly integrated AI tooling, you're behind. Teams using AI well are shipping faster, iterating quicker, and taking on bigger problems with the same people. That gap compounds over time into something very hard to close.

I've spent my career at companies that understood survival. Northern companies, tight budgets, small teams. AI amplifies that advantage massively. It's what the lean startup philosophy always promised, except the tooling finally exists.

And then you've got companies like Holiday Extras, who I worked at years ago, still polishing their Sunday Times Best Companies to Work For trophy like it's the Champions League. Good for culture surveys, that. Doesn't ship software faster. You can have the happiest team in Kent and still get outdelivered by a squad in Manchester who've embraced AI and know how to use it. Awards on the shelf don't keep you competitive. Delivery does.

But here's the thing I've been saying since my first post: AI does not fix broken fundamentals. Slow CI, bottlenecked code review, tangled architecture. AI just makes those problems worse, faster. The research shows organisations with high AI adoption but poor practices see PR review times balloon by 91%. You cannot out-tool a broken process.

Where This Goes

That METR study used early 2025 tools. Claude 3.5 Sonnet, Cursor Pro. What's available now is already better. Claude Code understands your codebase, executes multi-step tasks, and fits into a real engineering workflow.

AI agents will handle full workflows for low-risk changes. Dependency bumps, routine features, maintenance that eats a third of team capacity. That's what my builder tool is aimed at.

Engineering roles will split. Some will specialise in AI-assisted rapid delivery. Others will go deep on architecture, reliability, security. Software, QA, cloud, platform, all of it. The generalist "bit of everything" engineer gives way to specialists who are either brilliant at directing AI or brilliant at the work AI can't do.

The gap between AI-enabled and non-enabled teams becomes a survival question. Companies that haven't moved will struggle to hire, struggle to ship, and bleed talent.

The engineers who thrive will be the ones who keep their core skills sharp while using AI as the biggest productivity lever we've ever had. We were never paid to write code or configure pipelines or run test suites. We were paid to solve problems.

And solving problems still takes a human who gives a damn.