No, AI Did Not Kill Agile
Why execution speed doesn't solve a knowledge problem
TL;DR
The “AI killed Agile” argument confuses two different problems. There is a speed problem — how fast can we turn an idea into working software? And there is a knowledge problem — how do we figure out which idea is the right one? AI largely solves the first. It does not touch the second. Agile was created for the second. The push for “write a detailed spec, then have AI build it” is not innovation — it is waterfall with faster computers. And the person who (accidentally) invented waterfall warned us it would fail.
The principle that built many Agile practices — learn what to build by building, observing, and adapting — is more relevant than ever. When building is nearly free, the only excuse for not iterating is never having understood why iteration mattered in the first place.
1. The Argument (and Why It Feels Right)
A new narrative is gaining traction: AI has made Agile obsolete.
Steve Jones, Executive VP at Capgemini, puts it bluntly: “Agentic SDLCs are too fast for Agile.” Jennifer Jones-Mitchell of HumanDrivenAI argues that “Agile’s iterative cycles still rely on humans to ideate, test, and refine over weeks or months. Generative AI tools can produce viable prototypes, campaigns, or solutions in minutes” [6, 15].
The argument is seductive, and it would be dishonest not to acknowledge why. Anyone who has sat through a performative sprint planning meeting, a daily standup that was just a status report, or a retrospective that changed nothing feels the appeal. If AI can ship in a day what used to take a sprint, why keep the ceremony?
That frustration is legitimate. Martin Fowler, one of the Agile Manifesto’s signatories, coined “semantic diffusion” to describe how Agile lost its original meaning. Many organisations practice what he calls “faux-agile”: implementing Scrum ceremonies while ignoring the adaptive principles that made those ceremonies useful [4].
2. The Confusion: Speed vs. Knowledge
The “AI killed Agile” argument treats the speed of coding as the constraint Agile was designed to address. It wasn’t.
In February 2001, seventeen practitioners met at Snowbird, Utah — representing Extreme Programming, Scrum, Crystal, Feature-Driven Development, and other lightweight methods — united against heavyweight, documentation-driven processes that were failing to deliver working software [3]. The result was the Agile Manifesto:
Responding to change over following a plan
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Individuals and interactions over processes and tools [1]
None of these say “write code faster.” They address a different problem: we do not know what to build until we start building it. Not because we are bad at planning, but because the information needed to plan correctly does not exist until real users interact with real software.
This is an epistemological claim, not a methodological preference. The Manifesto’s Principle 2 states: “Welcome changing requirements, even late in development.” Principle 11: “The best architectures, requirements, and designs emerge from self-organizing teams” [2]. The word emerge is doing the heavy lifting. Requirements are not specified and then executed. They are discovered through building, delivering, and observing.
Research bears this out: 40-60% of software project failures originate in requirements, not slow implementation [17]. Only about 20% of features deliver the positive impact initially intended [14]. The problem was never “we code too slowly.” It was always “we don’t know which features are the right ones until we put them in front of real users.”
AI makes coding faster. It does not make stakeholders know what they want. It does not reduce market uncertainty. It does not reveal whether users will actually use what you built.
3. The Inversion: Faster Means More, Not Fewer
Here is the part the “AI kills Agile” crowd gets exactly backwards.
If AI makes it cheaper to produce working software, the logical conclusion is not “we don’t need iterations.” It is “we can afford more iterations.”
Eric Ries’s Lean Startup methodology is built on the Build-Measure-Learn loop: build the smallest thing that tests your assumption, measure what happens when real users interact with it, learn whether to continue or pivot. The antidote to uncertainty is not better planning — it is “validated learning: a rigorous method for demonstrating progress when one is embedded in the soil of extreme uncertainty” [8].
If AI collapses the “Build” phase from weeks to hours, a team can run more Build-Measure-Learn cycles per month. That is not the death of Agile. It is Agile unleashed.
But here is the nuance: AI makes Build so cheap it rounds to zero. It does not make Measure or Learn cheap. Observing how real users behave, understanding why they behave that way, synthesising that understanding into the next decision — these remain expensive, slow, and irreducibly human. So the bottleneck shifts. In a world of instant builds, the constraint moves from “how fast can we code?” to “how fast can we learn?” That does not kill iteration. It recenters it on what always mattered: the learning.
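The shift in bottleneck can be made concrete with a toy calculation. All durations below are invented for illustration — the point is only the shape of the arithmetic: shrinking the build phase raises the cycle count, but the loop stays bounded by how long measuring and learning take.

```python
# Toy model: how many Build-Measure-Learn cycles fit in a 30-day month?
# All phase durations are illustrative assumptions, not measurements.

def cycles_per_month(build_days: float, measure_days: float, learn_days: float) -> float:
    """Cycles completed in a 30-day month, given per-phase durations in days."""
    return 30 / (build_days + measure_days + learn_days)

# Pre-AI: building dominates the loop.
before = cycles_per_month(build_days=10, measure_days=5, learn_days=3)   # ~1.7 cycles

# With AI: build time collapses, but measuring and learning do not.
after = cycles_per_month(build_days=0.5, measure_days=5, learn_days=3)   # ~3.5 cycles
```

Even with build time at zero, the loop tops out at 30 / (5 + 3) = 3.75 cycles: the ceiling is set entirely by measure and learn. Faster building roughly doubles the learning rate here, but past that point, only faster learning helps.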
Kent Beck, co-author of the Agile Manifesto, sees it this way. AI has shifted the cost landscape: “The whole landscape of what’s ‘cheap’ and what’s ‘expensive’ has all just shifted.” His recommendation: test approaches previously deemed too expensive. Not “abandon discipline.” Not “skip feedback.” Experiment more [5].
4. “But I Can Iterate on the Spec”
There is a smarter version of the “AI kills Agile” argument that deserves engagement. It goes: “I write a spec, AI builds a prototype in minutes, I show it to users, I revise the spec, AI rebuilds. The loop is tight and fast. That IS iteration.”
This is reasonable, and the workflow it describes can genuinely work — when the spec is treated as a disposable hypothesis. Write a one-page prompt, generate a prototype, show it to users, throw the prompt away, start fresh. That is rapid experimentation with a generative tool. Call it agile if you want. It might well be.
But that is not what most people mean by “spec-driven development,” and it is not what the “write a solid spec upfront” advocates are proposing. Their version is different: invest in a comprehensive, detailed specification — a document that captures requirements, edge cases, business rules — then hand it to AI for execution. The spec is the artifact that is reviewed, signed off, refined, and version-controlled. It becomes the primary record of truth.
And the moment that happens, the spec takes on a gravity of its own. Each revision carries forward assumptions from previous iterations that nobody re-examines. The spec becomes a sedimentary record of decisions, many of which were wrong but are now buried under layers of refinement. You are no longer exploring. You are predicting — encoding guesses about what users need into a document and then executing those guesses with increasing fidelity.
That is waterfall. It has always been waterfall. It is what killed the FBI’s $170 million Virtual Case File — not slow coding, but a specification that could not keep up with what the organisation was learning about its own needs [9].
Winston Royce — the person credited with inventing waterfall — understood this in 1970. In the same paper, he warned: “I believe in this concept, but the implementation described above is risky and invites failure.” He advocated passing through the process “at least twice” [16]. The waterfall model as practiced was a misreading of his paper. He presented it as a cautionary example.
Fifty-five years later, “write a detailed spec and have AI build it” recreates the same pattern. The spec is still wrong. It will always be wrong. Because requirements are not merely vague — they are unknown and emergent. The Manifesto says it plainly: “Welcome changing requirements, even late in development” [2]. Not “write better requirements.” Welcome the change, because the change is the learning.
The distinction is precise: specs are predictive — you guess what users need and encode the guess. Working software in users’ hands is explorative — you discover what users actually do. A spec, no matter how rapidly iterated, is a model of what users want. Working software is what users interact with. The gap between model and reality is where projects fail.
5. The Real Threat
The “AI killed Agile” narrative has a political dimension that makes it genuinely dangerous.
One reason Agile became bureaucratic was that management wanted predictability. Sprints, story points, and velocity charts gave managers the illusion of control over an inherently uncertain process. Agile was adopted by organisations that wanted its speed without accepting its core bargain: you cannot predict what you will deliver until you start delivering it.
AI makes this worse. The pitch is irresistible to a certain kind of stakeholder: “Just write the spec. The AI will handle it.” This is not a technology argument. It is a power argument — the desire to return to a world where management defines requirements, engineering executes them, and the humbling process of learning from users can be skipped.
Healthcare.gov is the canonical example. Over $2.1 billion spent. Six users on launch day. The team spent months building against frozen requirements that were wrong. AI would have generated the wrong code faster — and the team still would have discovered the integration failures at the same time, because they deferred testing until the end. The post-mortem recommended “incremental approaches such as betas, early testing and regular delivery” [10, 11]. The problem was not development speed. It was the organisational refusal to learn incrementally. Faster tools do not fix a process designed to avoid learning.
The real threat is not that engineers will stop iterating. Most good engineers know better. The threat is that AI gives non-technical decision-makers a fresh justification for the approach they always preferred: define everything upfront, hand it off, and skip the uncomfortable discovery that the original vision was wrong.
Alistair Cockburn, another Manifesto co-author, distilled Agile to four imperatives: Collaborate, Deliver, Reflect, Improve [7]. Each one is a check on the natural organisational tendency to plan in isolation, build in darkness, and declare victory. AI does not make any of these obsolete. It makes the temptation to skip them more seductive and the consequences of skipping them more expensive.
Conclusions
No technology has ever solved the problem of building something nobody wanted. Faster horses, faster compilers, faster AI — the tool changes, the failure mode does not.
AI makes building cheaper. That is genuinely transformative. But when building is nearly free, learning what to build becomes the only thing that separates teams that ship products people use from teams that ship products nobody asked for.
Many Agile practices should evolve (“We are uncovering better ways of developing software by doing it and helping others do it”) but the core discipline survives: build something small, put it in front of real people, learn from what happens, adapt. The loop does not get shorter with AI. It gets cheaper. And when the loop is cheaper, you should run it more often, not less.
AI did not kill Agile. AI killed the excuse for not iterating.
References
[1] Beck, K. et al. “Manifesto for Agile Software Development.” AgileManifesto.org, 2001. https://agilemanifesto.org/
[2] Beck, K. et al. “Principles behind the Agile Manifesto.” AgileManifesto.org, 2001. https://agilemanifesto.org/principles.html
[3] Agile Manifesto Authors. “History: The Agile Manifesto.” AgileManifesto.org, 2001. https://agilemanifesto.org/history.html
[4] Fowler, M. “Agile Software Guide.” MartinFowler.com. https://martinfowler.com/agile.html
[5] Orosz, G. “TDD, AI agents and coding with Kent Beck.” The Pragmatic Engineer.
[6] InfoQ. “Does AI Make the Agile Manifesto Obsolete?” InfoQ, February 2026. https://www.infoq.com/news/2026/02/ai-agile-manifesto-debate/
[7] Cockburn, A. “Heart of Agile.” HeartOfAgile.com. https://heartofagile.com/
[8] Ries, E. “The Lean Startup - Principles.” TheLeanStartup.com. https://theleanstartup.com/principles
[9] Goldstein, H. “Who Killed the Virtual Case File?” IEEE Spectrum, September 2005. https://spectrum.ieee.org/who-killed-the-virtual-case-file
[10] CIO. “6 Software Development Lessons From Healthcare.gov’s Failed Launch.” CIO.com, 2013. https://www.cio.com/article/288541/developer-6-software-development-lessons-from-healthcare-gov-s-failed-launch.html
[11] NPR. “This Slide Shows Why HealthCare.gov Wouldn’t Work At Launch.” NPR, November 2013. https://www.npr.org/sections/alltechconsidered/2013/11/19/246132770/this-slide-shows-why-healthcare-gov-wouldnt-work-at-launch
[12] Springer. “Continuous clarification and emergent requirements flows in open-commercial software ecosystems.” Requirements Engineering, 2016. https://link.springer.com/article/10.1007/s00766-016-0259-1
[13] ScienceDirect. “Tackling Requirements Uncertainty in Software Projects: A Cognitive Approach.” 2021. https://www.sciencedirect.com/science/article/pii/S2666307421000218
[14] DevIQ. “Big Design Up Front (BDUF): A Software Development Antipattern.” DevIQ.com. https://deviq.com/antipatterns/big-design-up-front/
[15] Jones-Mitchell, J. “How AI Killed the Agile Process (And Why That’s a Good Thing).” HumanDrivenAI, December 2024. https://humandrivenai.com/2024/12/04/how-ai-killed-the-agile-process-and-why-thats-a-good-thing/
[16] Royce, W.W. “Managing the Development of Large Software Systems.” Proceedings of IEEE WESCON, 1970.
[17] Requiment. “Why Do Software Development Projects Fail?” Requiment.com. https://www.requiment.com/why-do-software-development-projects-fail/