Discussion about this post

Neural Foundry

The analogy to human developers nails it. We've always managed non-determinism through process, not by demanding perfect consistency from individuals. The part about AI being an amplifier cuts both ways, though, and that's a fair warning. Teams without good verification habits will absolutely accelerate their way into worse outcomes. I've seen a few projects where people treat LLM outputs as gospel and skip testing because "the AI wrote it," which is the exact opposite of what should happen.

André Figueira

Strong argument, and I agree the non-determinism critique is largely a category error. But I'd push further than "verify outputs" as the answer.

You're right that we don't expect humans to be deterministic. We expect them to be contextually calibrated. A senior engineer produces better code because they carry richer context: domain knowledge, codebase familiarity, architectural constraints, team conventions.

The same applies to AI tools. The variability we observe is a symptom of underspecified context. Give an LLM a vague prompt, get variable outputs. Give it structured context that constrains the solution space (architectural decisions, coding standards, domain models, explicit boundaries) and outputs converge toward intended behavior.

I've been calling this "documentation as context as code," or "the .context method": treating structured markdown context as the primary control mechanism, collapsing the probability distribution at the input layer.
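To make the idea concrete, here's a minimal sketch of what "structured context as the control mechanism" can look like in practice. The file names, section contents, and the `build_prompt` helper are all illustrative assumptions on my part, not the actual implementation of the .context method:

```python
# Hypothetical sketch: structured markdown documents are concatenated
# ahead of the task prompt, so the model's solution space is constrained
# at the input layer rather than only verified at the output.
# Document names and contents below are invented for illustration.

CONTEXT_DOCS = {
    "architecture.md": "# Architecture\nServices communicate via the event bus; no cross-service DB access.",
    "standards.md": "# Coding standards\nPython 3.11, type hints required, no bare excepts.",
    "domain.md": "# Domain model\nAn Order is immutable once placed; refunds create a new CreditNote.",
}

def build_prompt(context_docs: dict[str, str], task: str) -> str:
    """Prepend structured markdown context to the task description."""
    sections = [f"<!-- {name} -->\n{body}" for name, body in context_docs.items()]
    return "\n\n".join(sections) + "\n\n## Task\n" + task

prompt = build_prompt(CONTEXT_DOCS, "Add a refund endpoint to the orders service.")
print(prompt)
```

The point isn't the plumbing, it's that the same vague task ("add a refund endpoint") now arrives wrapped in explicit constraints, so the distribution of plausible completions is much narrower.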

The verification you describe remains essential. There's also an upstream intervention most teams are missing entirely.

I've been documenting the approach at buildingbetter.tech if anyone wants to dig deeper.

I cover a lot of it here, with a repo showing how to implement the methodology: https://buildingbetter.tech/p/the-context-method
