Why thinking still matters for AI-enabled engineers

We’re in one of the most exciting times to be an engineer. Whilst over the years there have been many advances in the languages and frameworks we use, AI-powered coding agents are arguably the fastest accelerators we’ve seen, and can be a real superpower for engineers who use them wisely.

In this article I want to focus on one particular aspect of “using AI tools wisely”, namely “thinking”. I also provide a set of “mental guardrails” for using AI tools – something that all engineers should keep in mind.

AI is an amplifier, not a replacement for engineers

AI is undoubtedly a great force multiplier, allowing us to accelerate so many aspects of software engineering, not least to:

  • Iterate multiple concurrent POCs to evaluate a suitable approach to use
  • Accelerate research on problem domains
  • Generate code that follows existing patterns and conventions – code that would otherwise be repetitive or boilerplate – without needing to build bespoke tools
  • Rapidly understand existing or problematic code bases, and get to the root cause of runtime issues more quickly
  • Build new tools to solve problems that might otherwise have stayed a “one day I’ll get around to that” project

There is a huge number of other use cases for which engineers should be actively embracing these tools. But AI is not a replacement for strong engineering knowledge, or for the thinking engineers do when determining how best to apply their knowledge and experience to solve a problem.

Why “AI Slop” Exists

We hear a lot about AI slop: mountains of generated code that is apparently a ticking time bomb waiting to go off. That could be unreviewed, untested, context-free code that “technically works” but in fact erodes architecture, safety, and maintainability.

So if AI, used well and following age-old engineering first principles, is so good, why do we hear so much about the “slop”?

AI tools clearly have the ability to greatly amplify the outputs of any engineer using them. As the old saying goes: garbage in, garbage out. Is the real reason for the so-called slop just inexperienced or poor engineers finding that they can now do a lot more, much faster? Or is it engineers who do know better being lazy and sacrificing quality for speed?

In reality it’s a mix of both, and you can argue which is worse…

Blinded by Magic

Much like when humankind first discovered fire, often major innovations can seem like magic to those who bear witness but don’t understand it.

It’s like giving power tools to someone who’s never built a chair: you don’t get more craftsmanship. You might get lucky, but really you just end up with more broken wood, faster.


We certainly saw a lot of this 2-3 years ago, when the use of LLMs for engineering was first being talked about as making engineers redundant. Although this hasn’t happened, the claim still seems to resurface every other week.

Whilst we’ve come a long way very quickly and now have very impressive tools, they are not what Fred Brooks would call a silver bullet. It’s still very easy to be blinded by the magic, forget best practices and ultimately just trust that the results will be good if you keep persevering.

Never forget Best Practices

I’m sure we’ve all got war stories, or have heard tales, of engineers who were so confident in their abilities that they pushed code to production which compiled but was never tested. We all laugh at this because it seems so ridiculously wrong – surely we’d never do it ourselves?

In the past these became one-off tales, and examples for junior engineers of what not to do.

The real challenge is that AI has a very positive bias and has been shown to be quite sycophantic towards the user, so it’ll tell you what a great job it’s done and how good your ideas are. Sometimes this’ll happen even after it’s previously told you to do the complete opposite!

Ultimately the speed of AI tools means that mistakes born of poor practice, left unchecked, start to multiply at a much greater rate than they would without such tools.

Prompting without thinking isn’t engineering, it’s just autocomplete with a Git repo.

The Cognitive Cost of Speed

The real danger is that engineers stop thinking about what they actually want to build and put their trust more in AI because it’s quicker.

The human brain is quite wonderful in the way it works. I like the analogy from Pragmatic Thinking and Learning by Andy Hunt to explain how we actually think about problems we need to solve.

We have a small, conscious ‘stack machine’ for active reasoning and a powerful, background ‘DSP’ for pattern matching and insight. We don’t consciously control the latter. You’ve probably experienced this yourself: you’re driving a familiar route, or doing some other mundane task, feeling almost on autopilot, when something you’ve been puzzling over for ages suddenly surfaces with a perfect answer.

Good things come to those who wait, as the saying goes!

Hard problems need both modes of thinking, and that ultimately takes time. If every hard problem is immediately offloaded to an AI, we never give our own DSP time to work. We trade deep understanding for fast, shallow answers.

Thinking as a Competitive Advantage

So why does all this matter for engineers using AI tools?

If we offload all our thinking to an AI agent in the pursuit of speed, do we actually get good results, or just results the AI statistically outputs based on our inputs? And if we’re getting answers quickly, do we actually think in depth about the problem, or just accept what the AI gives us as good enough?

When we give AI a strong plan with clear context and guidelines, we know the results can be many times better in terms of the quality and robustness of the code generated. Plans take time, and it’s important we take the time to think about where AI can reliably accelerate our work.

AI doesn’t remove the need for experience; it exposes the absence of it.

For example, an engineer who defines the architecture, constraints, and other considered aspects of not only what a system needs to do but how it should do it is going to get better results than the engineer who pastes a vague prompt like “build a microservice for payments” and ships whatever comes out.

It sounds like an exaggerated example but this is the reality when people believe the hype.

Taking a more structured approach – creating a plan with AI, then reviewing it in a feedback loop to finesse it and add detail before asking for any code to be generated – is known to bring much better outcomes. Ultimately a process like this makes time and space for thinking, rather than chasing immediate results.
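The plan-review-refine loop described above can be sketched in code. Everything here is illustrative: `ask_model` is a hypothetical stand-in for whichever agent or API you use, and the “review” step is reduced to a simple completeness check that, in practice, a human engineer would perform in the feedback loop.

```python
# Minimal sketch of a plan -> review -> refine loop, under stated assumptions.
REQUIRED_SECTIONS = {"goal", "constraints", "architecture", "testing"}

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM/agent call, stubbed so this runs.
    return f"[model draft for: {prompt}]"

def missing_sections(plan: dict) -> set:
    # Review step: which required sections has the plan not covered yet?
    return REQUIRED_SECTIONS - plan.keys()

def refine_until_complete(plan: dict, max_rounds: int = 5) -> dict:
    # Feedback loop: keep refining the plan; only then would you generate code.
    for _ in range(max_rounds):
        gaps = missing_sections(plan)
        if not gaps:
            break  # plan is complete - now it is safe to ask for code
        for section in sorted(gaps):
            plan[section] = ask_model(f"Draft the '{section}' section of the plan")
    return plan

plan = refine_until_complete({"goal": "a payments microservice"})
```

The point of the sketch is the ordering: generation is gated behind an explicit review of the plan, which is where the human thinking happens.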

The Real Superpower is Thinking

In a world where anyone can generate thousands of lines of code in minutes, the differentiator is no longer speed. It’s intent. It’s judgment. It’s the ability to pause, reason, and decide what should be built before asking a machine to help build it.

AI doesn’t remove the need for experience; it exposes the absence of it. When engineers rush, outsource their thinking, and accept plausible-looking answers at face value, AI simply accelerates the production of technical debt. But when experienced engineers take the time to think before prompting, AI becomes a genuine superpower, amplifying clarity not randomness.

AI coding agents are not the end of engineering thinking; they are the ultimate test of it.

Going back to the start of this article, AI is a powerful amplifier for software engineering. It’s clear that the best outputs come to those who can articulate a plan, challenge the AI’s outputs, and recognise that something “working” is not the same as something being right.

The engineers who slow down to think will build systems that last and benefit most from the amplification.

The ones who don’t will just ship more mistakes faster.

In the age of AI, thinking isn’t optional. It’s the competitive advantage.

A Short AI Usage Checklist for Senior Engineers

Use this as a set of “mental guardrails” when working with AI coding agents:

🧭 Before You Ask AI Anything

  • Be clear on what problem you’re solving, not just what code you want. Write the design in human language first. If you can’t explain it, you’re not ready to prompt.
  • Identify non-negotiables: architecture, performance, security, testing, domain rules.
  • Decide where AI should accelerate work, not where it should decide things for you.
  • Apply engineering best practices to prompting. Reuse proven patterns and provide sufficient context, to create consistency and enable structured review and refinement.

🧠 While Using AI

  • Treat AI like a junior engineer with infinite energy and zero context.
  • Provide structure: plans, constraints, examples, and explicit expectations.
  • Watch for sycophancy: AI agreeing with you doesn’t mean it’s correct.
  • If you’re iterating endlessly, stop. Step back. Re-think the problem.

🔍 After AI Produces Output

  • Review the code as critically as you would any human-written code – often more so.
  • Test assumptions, not just functionality.
  • Ask it to explain trade-offs, not just produce code.

As you review, ask:

  • Does this align with our architecture?
  • Does it respect domain invariants?
  • Would I be comfortable maintaining this in two years?

🛑 Red Flags to Watch For

  • Accepting output because it “looks right” or compiles.
  • Letting AI define structure instead of enforcing one.
  • Optimising for speed at the expense of understanding.
  • Replacing design discussions with prompt iterations.
  • You can’t explain why the code works, only that it does. A good engineer should always be curious and want to understand why things work a particular way.

Used thoughtfully, AI doesn’t make engineers obsolete – it makes good engineers exceptional. I’ve always been clear that senior engineers are the ones who don’t write the most code, they’re the ones who know what code needs to be written.

By Alexis Shirtliff, Head of Engineering | One Beyond