AI Didn’t Change What Good Code Is
Hanco Cyber

When generation becomes cheap, organisations start confusing throughput with progress. Software is not improved by adding syntax; it is improved by reducing ambiguity and failure modes.

Since when did we decide that more code from AI is better? Because that is what it looks like right now: more output, faster delivery, more things “done”. But that was never the standard.

There is a quiet assumption creeping in that if the machine can produce it, we should probably keep it. That volume is a kind of progress. It isn’t. When code was expensive, we argued about every piece of it. We cut things, simplified, removed entire branches of logic because they were not worth carrying. Now the cost is gone, and with it, the discipline.

What has changed is that you can now generate a complete-looking system in minutes. It has functions, logging, error handling, structure. It looks like something you can ship. But looking complete is not the same as being simple.

The complexity is not intentional or designed; it is just generated structure. That means more paths, more assumptions, more states, and more ways for things to break.

The old rule still holds. Better code is not code that exists; it is code that had a reason to exist and survived being cut. Good systems are small because everything unnecessary was removed. AI does not remove things; it adds them. It wraps instead of deletes, generalises instead of deciding, and keeps options open instead of closing them. You end up with code that works, but you do not fully understand why, or what you can safely remove.
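To make that concrete, here is a hypothetical sketch. Every name and parameter is invented for illustration, not taken from any real generated output. The first function is the shape generated code tends toward: options kept open, failures absorbed. The second is what the task actually needed.

```python
# Generated style: wraps, generalises, keeps options open.
# Four knobs, several silent fallbacks, and bad input quietly
# becomes the default value instead of an error.
def parse_port(value, default=8080, strict=False, allow_none=True):
    if value is None:
        if allow_none:
            return default
        raise ValueError("port is required")
    try:
        port = int(value)
    except (TypeError, ValueError):
        if strict:
            raise
        return default
    if not 0 < port < 65536:
        if strict:
            raise ValueError(f"port out of range: {port}")
        return default
    return port

# What the system needed: one path, one failure mode.
# Invalid input fails loudly instead of being masked.
def parse_port_minimal(value):
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

Both "work", but the first has several extra states to reason about, and its fallbacks mean a misconfigured port is silently replaced rather than reported. Every one of those extra branches is something a reviewer must understand before daring to remove it.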

From a security perspective, this is worse than it looks. Every line you keep is another place something can go wrong, another input that behaves differently, another assumption that was never written down. We already know what happens when code grows without being fully understood. Bugs sit there for years, sometimes decades, not hidden, just not examined closely enough. Now there are more places to look, and less understanding of what is actually there.

The questions that matter are still the same:

  • Did it remove work?
  • Did it reduce complexity?
  • Did it preserve intent?
  • Did it create a smaller attack surface?
  • Can a human still understand it in one sitting?

If the answer is no, then it is just more code. AI is very good at compressing typing, but that is not the same as compressing thought. Engineering is mostly deciding what not to build, what not to support, and what not to keep. Lose that, and you do not get better systems; you just get larger ones.

The risk is not that AI writes bad code. The risk is that it makes it easy to accept code that should never have been there in the first place. And once it is there, it tends to stay, because removing code still requires understanding it.

Less code is still better. Fewer paths are still better. Clear intent is still better.