What if AI helps us build anti-fragile systems?
On non-determinism, intent, and software that stays alive
The sheer power of AI tools, even with all their non-determinism, feels like a massive opportunity. Not just to build higher-quality systems, but to build systems that are actually anti-fragile.
A lot of the worries we have today about AI agents and reliability seem heavily shaped by the world we come from. Computing has been dominated by determinism for decades. Same input, same output; otherwise something is wrong. That way of thinking made sense, but it also limits how we imagine what comes next. We are notoriously bad at this as humans. We almost always picture the future as a slightly tweaked version of the present.
There are plenty of famous examples. The 640K memory quote often linked to Bill Gates. The idea that nobody would ever need a telephone. Someone important confidently explaining that television would never really take off. Whether the quotes are perfectly accurate or not almost does not matter. The pattern does. We mistake current constraints for universal truths.
If we are willing to accept that the same behaviour can come from different internal paths, even with the same input, things start to look different. Variability stops being purely a liability. With the right adversarial constraints, it can become a strength. Think of approaches like the nWave framework that my friends Alessandro and Michele have been developing, where challenge and review are deliberately built into the system.
Set up like that, a system can absorb internal deviations and also cope with the world changing underneath it. That already feels like more than just being robust.
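The shape of that idea can be sketched in a few lines. To be clear, this is not nWave's actual design or API; `propose`, `review`, and `run_with_challenge` are hypothetical names, and `propose` merely stands in for a non-deterministic generator such as an LLM call. The point is only that variability in how an answer is produced becomes acceptable once an adversarial reviewer bounds the behaviour:

```python
import random

def propose(task):
    """Stand-in for a non-deterministic generator (e.g. an LLM call).
    Hypothetical, for illustration only: may return a different
    candidate each time it is called."""
    return f"candidate for {task!r} (variant {random.randint(0, 9)})"

def review(candidate, constraints):
    """Adversarial reviewer: accepts a candidate only if it satisfies
    every constraint. Constraints are plain predicates here."""
    return all(check(candidate) for check in constraints)

def run_with_challenge(task, constraints, max_rounds=5):
    """Challenge loop: internal variability in propose() is tolerated
    because review() bounds the observable behaviour, not the path."""
    for _ in range(max_rounds):
        candidate = propose(task)
        if review(candidate, constraints):
            return candidate
    raise RuntimeError("no candidate survived review")

# Example constraint: the output must actually mention the task.
constraints = [lambda c: "greeting" in c]
print(run_with_challenge("greeting", constraints))
```

Two different runs can take different internal paths and still both pass review, which is exactly the shift from "same input, same output" to "same input, same boundaries".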
This is where AI feels most interesting to me. Not as something we use once to generate code or configuration, but as something we inject into the system itself. Almost like the lymph of a living organism. It keeps flowing, reacting to events, while staying within some expected boundaries of behaviour.
In that world, every system becomes an expert system about its own domain and about the environment it operates in. We set intent and constraints, but the product stays alive. It adapts, reacts to what actually happens, and feeds back to us what no longer makes sense, including where our original assumptions or specs are wrong.
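One way to picture a product that "stays alive" is a loop that checks declared assumptions against what actually happened and reports the ones that broke. Everything below is an invented sketch, not a real framework: `Assumption`, `LivingSystem`, and the metric names exist only for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Assumption:
    name: str
    holds: Callable[[dict], bool]  # predicate over observed metrics

@dataclass
class LivingSystem:
    assumptions: list[Assumption]
    violations: list[str] = field(default_factory=list)

    def observe(self, metrics: dict) -> list[str]:
        """Check each declared assumption against observed reality and
        return the ones that no longer hold, so a human (or another
        agent) can revisit the original spec."""
        broken = [a.name for a in self.assumptions if not a.holds(metrics)]
        self.violations.extend(broken)
        return broken

system = LivingSystem(assumptions=[
    Assumption("p95 latency under 200ms", lambda m: m["p95_ms"] < 200),
    Assumption("mostly mobile traffic", lambda m: m["mobile_share"] > 0.5),
])

# The world changed underneath us: desktop traffic took over.
print(system.observe({"p95_ms": 120, "mobile_share": 0.3}))
# → ['mostly mobile traffic']
```

The interesting part is the direction of the feedback: instead of us periodically auditing the system, the system tells us which of our assumptions stopped being true.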
This might all be a bit wild. It definitely needs more thinking and probably a lot of pushback. But it feels like a direction worth exploring, especially if we stop judging AI purely through a deterministic lens and start designing for change rather than trying to eliminate it.

