The Argument You're Winning Is the Wrong One
Why “AI won’t replace engineers” defends against an attack nobody is mounting
The debate about artificial intelligence and software engineering has calcified into two predictable camps. On one side, breathless prophets declare that programmers will be obsolete by next Tuesday. On the other, defensive engineers insist that AI will never replace them because software development requires creativity, judgment, and all manner of ineffable human qualities.
Both positions miss the point entirely. But the second one is particularly insidious because it sounds reasonable while defending against an attack that nobody serious is actually mounting.
When someone proclaims that “AI is not going to replace engineers,” they are constructing and demolishing a straw man. The interesting question was never about wholesale replacement. It never has been, for any technology, in any era. The real questions are far more uncomfortable, which is precisely why we avoid them.
Let me explain what I mean.
The replacement fantasy has ancient roots. When John McCarthy, Marvin Minsky, and their colleagues gathered at Dartmouth College in the summer of 1956, they proposed a study based on the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” They estimated the project would take about two months with ten researchers.
This spectacular miscalculation set the template for AI discourse that persists to this day. The field has oscillated between manic overconfidence and depressive retrenchment ever since. The “AI winters” of the 1970s and late 1980s followed periods of grandiose promises that could not be kept. Expert systems were going to revolutionise everything until they didn’t. Neural networks were abandoned as dead ends until they weren’t.
Each cycle produces the same rhetorical pattern. Enthusiasts make sweeping claims about imminent transformation. Sceptics point to obvious limitations. The technology fails to deliver on the hype. Everyone declares AI overblown. Then, quietly, the technology improves, finds its niche, and reshapes things in ways nobody predicted.
The sceptics are always right in the short term and always wrong in the long term. The enthusiasts are always wrong in the short term and often right about the direction of change, if not the timeline.
This brings us to the philosophical heart of the matter. The Hungarian-British polymath Michael Polanyi articulated what we now call Polanyi’s Paradox: “We can know more than we can tell.” Much of human expertise consists of tacit knowledge that we cannot easily articulate or formalise. Hubert Dreyfus, the philosopher who spent decades critiquing AI, built his case on similar foundations. He argued that human intelligence is fundamentally embodied, contextual, and resistant to the kind of rule-based formalisation that AI required.
These arguments were powerful. They were also, as it turned out, arguments against a particular approach to AI rather than against AI itself. The shift from symbolic AI to machine learning, from explicit programming to pattern recognition from data, sidestepped many of Dreyfus’s objections. He was right that you cannot write down rules for recognising faces or understanding natural language. He did not anticipate that you wouldn’t need to.
John Searle’s famous Chinese Room argument makes a different point. A person who follows rules to manipulate Chinese symbols, without understanding Chinese, does not thereby come to understand Chinese. The room processes the symbols correctly but comprehension is absent. Searle meant this as a refutation of strong AI claims about machine consciousness and understanding.
But here’s the uncomfortable truth: much of what we call software engineering does not require understanding in Searle’s deep sense. It requires symbol manipulation according to patterns. When an AI generates code that compiles, runs correctly, and solves the stated problem, whether it “understands” what it’s doing becomes philosophically interesting but practically irrelevant.
The defender of human engineers says: but what about the hard parts? What about architecture, requirements analysis, understanding the business domain? What about debugging subtle issues and making judgment calls under uncertainty?
These are fair points. They are also, almost certainly, temporary ones. The history of automation is the history of tasks that could never be automated until they were.
Consider the game of Go. For decades, it served as the exemplar of human intuition triumphing over brute computation. Chess might fall to Deep Blue, but Go was different. The branching factor was too large. The positional judgment was too subtle. Human intuition was irreplaceable. In 2016, AlphaGo defeated Lee Sedol, and that particular argument died.
The pattern repeats. We identify what makes current AI inadequate. We treat those limitations as fundamental. We build our professional identity on being the ones who can do what machines cannot. Then the machines learn to do it anyway.
This is where the straw man becomes dangerous. When engineers say “AI won’t replace us,” they are defending against total elimination. But total elimination is not what’s coming. What’s coming is something more like what happened to bank tellers, travel agents, and countless other professions that still exist but employ far fewer people at different tasks than before.
The economists have a term for this: labour-saving technology. When a technology allows one worker to produce what previously required five, you don’t need to replace all the workers. You just need fewer of them. The work still exists. Human judgment is still involved somewhere in the process. But four out of five workers are doing something else, if they’re lucky, or nothing at all, if they’re not.
The augmentation narrative that many engineers embrace is a comforting frame that obscures this dynamic. Yes, AI augments human capability. That’s precisely the problem. If I can now accomplish with AI assistance what previously required a team, the team is no longer required. The fact that a human remains in the loop is cold comfort to the humans removed from it.
Karl Marx, whatever one thinks of his prescriptions, was remarkably prescient about machinery and labour. He observed that machines do not simply make work easier. They transform the relationship between labour and capital. They create what he called a “reserve army” of unemployed workers whose presence disciplines those still employed. Technology under capitalism serves capital’s interests, not labour’s.
One need not be a Marxist to recognise the dynamic. When AI enthusiasts at technology companies promise that AI will augment developers, they are making a statement about capability. When they predict that this will not affect employment, they are making a statement about economics that does not follow from the first claim and that their own incentives make them unqualified to assess.
The philosophical tradition offers another useful concept here: the Ship of Theseus. If you replace every plank of a ship, one at a time, is it still the same ship? Apply this to software engineering. If AI takes over code generation, then testing, then documentation, then debugging, then architecture, each time with a human “in the loop” providing approval, at what point has the engineer been replaced while still technically being present?
The answer is that replacement is not a binary event but a gradient. You can be replaced by degrees. You can be diminished incrementally. You can find yourself nominally in control while actually serving as a rubber stamp for decisions made elsewhere. The straw man of total replacement distracts from this more likely and more insidious outcome.
Martin Heidegger wrote about technology as a mode of “revealing,” a way of disclosing the world that also conceals other possibilities. When we frame the question as “will AI replace engineers,” we reveal a binary choice and conceal the spectrum of possibilities in between. The framing itself does ideological work, making certain outcomes thinkable and others invisible.
What should we actually be asking? Not whether AI will replace engineers, but how AI will transform engineering, who will benefit from that transformation, what skills will become more or less valuable, and how we might influence these trajectories rather than simply reacting to them.
The Luddites, contrary to popular caricature, were not opposed to technology as such. They were skilled textile workers who objected to the use of machinery to circumvent labour standards and degrade working conditions. Their complaint was not that machines were bad but that machines were being used badly. History has vindicated the direction of their concerns while condemning the futility of their methods.
We might learn from them. The question is not whether to resist AI, which is as futile now as smashing looms was then. The question is whether to participate in shaping how AI transforms our profession or to console ourselves with the fairy tale that it won’t.
When someone tells you that AI won’t replace engineers, ask them what they mean. If they mean that engineers will not vanish overnight, they are trivially correct. If they mean that the number, nature, and compensation of engineering roles will remain unchanged, they are almost certainly wrong. If they mean that they personally will be fine, they may be right or may be engaging in the same wishful thinking that has consoled displaced workers throughout history.
The honest position is uncertainty. We don’t know exactly how this will unfold. The capabilities are advancing faster than most predictions. The integration into actual workflows is slower than the hype suggests. The economic incentives are clear but the timeline is not.
What we can say is that treating “AI won’t replace engineers” as a meaningful contribution to this conversation is an intellectual abdication. It’s a thought-terminating cliché that protects us from harder questions. It is the equivalent of standing on the shore, watching the tide come in, and reassuring ourselves that water cannot replace sand.
The sand will still be there when the tide recedes. It will just be in a very different place.