The Infinite Software Crisis: When Code Becomes Free, Understanding Becomes Priceless
AI makes writing code essentially free. But when code costs nothing, understanding becomes impossible. Every generation faces a software crisis, and ours is a particularly tricky one.
This continues our last discussion, where we talked about how AI changes nothing for Product Success – you still need creativity for marketing, taste for finding aha moments, and deep architectural thinking for retention.
But now I’ve been thinking more about the engineering side of things. And there’s something darker happening that nobody really wants to talk about.
I’ve shipped code I don’t understand.
Generated it. Tested it. Deployed it. Couldn’t explain how it worked if my life depended on it. And I know you have too.
If AI doesn’t solve product challenges, what does it solve? It solves typing. It makes writing code essentially free. Zero cost. But here’s what nobody’s addressing: When code costs nothing to write, understanding what that code does becomes impossible.
We’re entering an era where writing code costs roughly zero, and the main scarce resource becomes understanding the system (context and comprehension). In the past, you could read the code and understand how it works. Now? A junior developer can generate 10,000 lines in an evening. Nobody can read all of that, let alone understand it.
This isn’t the first software crisis. Every generation has faced theirs. But ours is different – it’s infinite. Past crises: we couldn’t build fast enough. Now? We can build infinitely fast. But we can’t understand fast enough.
AI has simply accelerated this process to the extreme.
The Pattern That Never Ends
Let’s look at history. Because it keeps repeating itself.
Late 1960s: The first software crisis. Demand for software exploded, but we couldn’t keep up. As Dijkstra put it: when we had a few weak computers, programming was a mild problem; now that we have gigantic computers, programming has become an equally gigantic problem.
Then came the cycle.
1970s: C language lets us build bigger systems.
1980s: Personal computers – now everyone writes software.
1990s: Object-oriented programming. Thanks, Java, for those inheritance nightmares.
2000s: Agile, sprints, scrum masters.
2010s: Cloud, mobile, DevOps – software eats the world.
Today: AI – code as fast as we can describe it.
Notice the pattern? Each solution promised to solve complexity. In practice, it only allowed us to build even more convoluted systems.
Fred Brooks, American software engineer and computer scientist, wrote “No Silver Bullet” in 1986. His argument was: No single innovation will give us an order-of-magnitude improvement in productivity. Why? Because the hard part was never the mechanics – syntax, typing, boilerplate. The hard part: understanding the actual problem. Designing the solution.
Every tool we’ve created makes the mechanics easier. The core challenge stays just as hard.
Remember in the last episode when I said AI doesn’t make building great products easier? Same thing here. It doesn’t make building maintainable systems easier either.
So why do we keep optimizing for the wrong thing?
Because we confuse two words: easy and simple.
Rich Hickey, creator of the Clojure programming language, drew the distinction perfectly in his talk “Simple Made Easy”:
Easy = what’s at hand. Copying from Stack Overflow. Asking AI to generate a long slab of code.
Simple = the absence of entanglement. One piece doing one thing.
Human nature: we always choose easy. The old balance worked because complexity accumulated slowly, so we had time to refactor. AI destroyed that balance. It makes the “easy” part almost free while destroying “simple,” because agents can’t distinguish the essential complexity of the problem from the accidental complexity (the legacy of hacks and workarounds).
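To make that distinction concrete, here’s a toy TypeScript sketch (every name in it is hypothetical): the “easy” version braids validation, persistence, and notification into one function; the “simple” version keeps each piece doing one thing.

```typescript
// Minimal hypothetical interfaces so the sketch stands alone.
interface Db { insert(table: string, row: object): Promise<object>; }
interface Mailer { send(to: string, body: string): Promise<void>; }

// "Easy": grown by appending. Validation, persistence, and notification
// are entangled; none can change or be tested independently.
async function registerUserEasy(email: string, db: Db, mailer: Mailer) {
  if (!email.includes("@")) throw new Error("bad email");
  const user = await db.insert("users", { email });
  await mailer.send(email, "Welcome!");
  return user;
}

// "Simple": the same behavior, untangled. Each piece does one thing
// and can be understood, replaced, or reused on its own.
function validateEmail(email: string): string {
  if (!email.includes("@")) throw new Error("bad email");
  return email;
}
const createUser = (db: Db) => (email: string) => db.insert("users", { email });
const sendWelcome = (mailer: Mailer) => (email: string) =>
  mailer.send(email, "Welcome!");
```

Note that the “easy” version is shorter. That’s exactly why we keep choosing it.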
Here’s how it happens. You start: “Add OAuth to the app.” A clean OAuth.js file appears. Then: “Also add this feature.” Keep going: “Fix this error.”
By turn 20, you’re not having a discussion anymore. You’re managing chaos.
Developing through a chat interface like this is a trap. The architecture starts to mirror the conversation itself: every follow-up like “and also fix this” piles on top of the previous one, creating spaghetti code with dead branches of logic.
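A hypothetical caricature of what “turn 20” code looks like (every flag and function name here is invented for illustration): each prompt added a branch on top of the last, and nothing was ever removed.

```typescript
// One flag per "and also..." prompt; none ever retired.
const USE_OAUTH = true;            // turn 2
const USE_NEW_OAUTH = true;        // turn 9, supersedes USE_OAUTH
const LEGACY_SESSION_FIX = false;  // turn 14 workaround; cause fixed at turn 17

type Req = { user?: string };
type Res = { status: number };

const oauthFlow = (_req: Req): Res => ({ status: 200 });
const oldOauthFlow = (_req: Req): Res => ({ status: 200 }); // now unreachable
const passwordFlow = (_req: Req): Res => ({ status: 200 }); // dead since turn 3

function handleLogin(req: Req): Res {
  if (USE_NEW_OAUTH) {
    if (LEGACY_SESSION_FIX) {
      // workaround for a session bug that a later prompt fixed elsewhere
    }
    return oauthFlow(req);
  }
  if (USE_OAUTH) return oldOauthFlow(req); // dead branch of logic
  return passwordFlow(req);                // another one
}
```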
When I spend enough time working with a code-generation system (hello, Claude Code), something becomes obvious very quickly: I’m not designing architecture so much as watching it emerge. I can trace how each prompt shapes the repository in real time, seeing structure crystallize almost immediately. What’s unsettling isn’t the result – it’s how fast the momentum builds.
So here’s why AI makes it worse: it treats every line of code as a pattern to preserve. AI saw that authentication check on line 47? Using it as a pattern. That weird hack from 2019? Also a pattern. Technical debt doesn’t register as debt – it’s just more code.
Fred Brooks identified two types of complexity:
Essential complexity: the fundamental problem you’re solving. Users need to pay for things.
Accidental complexity: everything we added along the way (workarounds, defensive code, legacy decisions).
In a real codebase, these get so tangled that separating them requires context, history, experience. AI can’t distinguish between them. Humans can. When we slow down to think.
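Here’s a hedged sketch of that entanglement (the payment provider and the flags are made up): the essential line is “charge the customer”; everything wrapped around it is accidental, and only context and history can tell you which is which.

```typescript
// Stand-in for a real payment SDK; hypothetical API.
const paymentProvider = {
  charge: async (_customerId: string, _amountCents: number): Promise<void> => {},
};

type Charge = { customerId: string; amountCents: number };

// Essential complexity: the problem itself. A customer pays for something.
async function chargeIdeal(c: Charge): Promise<void> {
  await paymentProvider.charge(c.customerId, c.amountCents);
}

// What ships after years of hacks: the one essential call is buried in
// accidental complexity, and an agent preserves all of it as "the pattern."
async function chargeReal(c: Charge): Promise<void> {
  if (c.customerId.startsWith("legacy_")) {        // 2019 migration residue
    c = { ...c, customerId: c.customerId.slice("legacy_".length) };
  }
  for (let attempt = 0; attempt < 3; attempt++) {  // retry for a flaky proxy
    try {
      await paymentProvider.charge(c.customerId, c.amountCents); // essential line
      return;
    } catch {
      // swallow and retry; nobody remembers why
    }
  }
  // falls through silently after three failures; also nobody's decision
}
```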
But we’re not slowing down. We’re accelerating.
Stop Outsourcing Your Thinking
When code generation is instant but understanding takes hours (or days, or never) – we have a problem.
So what solution do we actually have? The point is to find a practical way forward, not just to complain about the problem.
The resource in demand is no longer writing code. It’s understanding the system.
Last time I said you can’t prompt engineer your way to good product sense. Same here – you can’t prompt engineer your way to good architectural sense.
To avoid drowning in chaos, we need to bring back that old-school engineering mindset. Not a methodology. Just fundamentals.
Phase 1: Research. Feed the agent documentation and diagrams so it can build a map of the system. This is NOT one-shot. Probe it: “What about caching?” “How does this handle failures?” When it’s wrong, correct it manually. Output: one or more research documents that compress hours of exploration into minutes of reading.
And be sure to read those documents carefully. This checkpoint is critical: you validate them against reality and catch errors now, before they become disasters later.
Phase 2: Planning. Write a specification of the changes – down to function signatures. The plan should be detailed enough that a junior developer could implement it mechanically, paint-by-numbers style. This is where architectural decisions happen. Service boundaries. Preventing unnecessary coupling. We spot problems before they happen because we’ve lived through them. AI doesn’t have that option (it just treats every pattern as valid).
The magic: You can validate this plan in minutes and know exactly what will be built.
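As a sketch of what “down to function signatures” can mean in practice, the plan can literally be a stub file: types, signatures, and boundary decisions pinned down before any generation happens. (Everything below, from the rate-limiting scenario to the option names, is an invented example, not a prescribed format.)

```typescript
// PLAN: add rate limiting to the public API (hypothetical example).
// The implementation phase must fill these in without changing a signature.

/** Sliding-window limiter keyed by API token; state lives in Redis. */
export interface RateLimiter {
  /** True if the call is allowed, false if the limit is hit. */
  allow(token: string, now?: Date): Promise<boolean>;
}

/** Factory. windowMs and maxCalls come from config, never constants. */
export declare function createRateLimiter(opts: {
  windowMs: number;
  maxCalls: number;
  redisUrl: string;
}): RateLimiter;

// Boundary decision, made here and not by the model:
// the limiter lives in middleware; request handlers never see it.
export declare function rateLimitMiddleware(
  limiter: RateLimiter
): (req: { headers: Record<string, string> }, next: () => Promise<void>) => Promise<void>;
```

The format matters less than the property: every decision an agent could improvise is already made.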
Phase 3: Implementation. Only now start generation. When AI has a clear specification, context stays clean and focused. No 50-message evolutionary conversations. Three focused outputs, each validated before proceeding. You review whether the result matches the plan, rather than trying to guess what the AI imagined.
The payoff: You can use a background agent or a smaller model because you’ve done the thinking. Review is fast because you’re verifying it followed the plan.
The Generation That Never Learned to Read Code
The three-phase approach bridges a critical gap. But let’s talk about what that gap actually is.
Pattern recognition doesn’t come from reading documentation. It comes from being burned. That instinct that says “this is getting too complex”? That’s accumulated scar tissue from production incidents. From being up at 3 AM debugging a cascade failure because someone nested five layers of abstraction.
When I spot a dangerous architecture now, it’s because I’ve maintained the alternative. When I push for simpler solutions, it’s because I’ve lived through the complex ones breaking in ways nobody predicted.
AI generates what you ask for. It doesn’t encode these lessons from past failures.
Every time we skip thinking to keep up with generation speed, we’re not just adding code we don’t understand. We’re losing our ability to recognize problems before they happen. That instinct atrophies when you stop reading code deeply enough to see the patterns (basically to train your own LLM in your own head).
Naturally, a question arises: What happens to the next generation?
Tech leads will shift from code reviews to design reviews and plan validation. That part’s clear. But what about juniors? The ones learning to code now? They’re learning to prompt, not to recognize patterns. They’re learning to generate, not to trace a request end-to-end until they truly understand every dependency.
How do you teach architectural instinct when nobody reads code anymore? How do you build pattern recognition when the code appears instantly and you never had to struggle through building it yourself?
There’s a gap emerging between engineers who debugged production systems and engineers who only know generation. The seniors carry context that can’t be transferred through prompts. The juniors have speed but lack the scar tissue that teaches judgment.
Those who continue to simply “chat with the code” will find themselves with production systems that are impossible to maintain or change safely. Not because they lack skill. Because they never built the instinct for recognizing complexity before it becomes a crisis.
The developers who thrive won’t be those who generate the most code. They’ll be those who understand what they’re building.
Yes, AI changes everything about how we write code. But it changes nothing about why software fails.
Every generation has faced their software crisis. Dijkstra’s generation created the discipline of software engineering in response. Now we face ours, with infinite AI code generation.
Here’s what we should have known all along: typing the code was always the incidental part. Someone had to do it, sure. But the real task was design. And surprise – it never went away. AI just removed the excuse we had for not thinking deeply enough about it.
Last time I said: “The difficulty is the point. That’s where all the fun comes from. The purpose. The meaning.” Same applies here. The thinking, the understanding, the architectural insight – that’s the work. That’s what separates good engineers from code generators.
The question isn’t whether we’ll use AI (I guess that ship has sailed). The question is whether we’ll still understand our own systems when AI is writing most of our code.
These thoughts were crystallized after watching an excellent talk by Jake Nations, a staff engineer at Netflix working on AI tool adoption.
See you in the next one.