Inspired by ordep.dev’s post: “Writing Code Was Never the Bottleneck” and Theo’s talk on SDLC and iteration speed
For as long as I’ve been building software, one thing has been clear: the slowest part of the process has never been typing out the code. The real delays happen in everything that surrounds it — understanding the problem, aligning on a solution, reviewing, testing, debugging, and keeping the system healthy over time.
The software development lifecycle (SDLC) has always been more about coordination, clarity, and correctness than raw typing speed. Now, with large language models (LLMs) able to generate working code in seconds, it’s tempting to think we’ve removed the bottleneck. But in reality, we’ve only moved it.
The Bottlenecks Before LLMs
In a traditional SDLC, the slowdowns are predictable:
- Code reviews take time because reviewers need to understand the intent, not just the syntax.
- Knowledge transfer through mentoring, pairing, and documentation is slow but necessary.
- Testing and debugging require careful thought and iteration.
- Coordination overhead — tickets, planning meetings, and agile rituals — eats into actual build time.
These steps exist to ensure quality, but they also slow down delivery. They require shared understanding, not just working code.
What LLMs Actually Change
LLMs such as GPT and Claude can produce functional code quickly. They can scaffold features, write boilerplate, and even suggest optimizations. This changes one thing: the time to first draft drops dramatically.
But the rest of the process remains:
- The code still needs to be understood by humans.
- It still needs to be reviewed for correctness, security, and maintainability.
- It still needs to be integrated into existing systems without breaking them.
- It still needs to be tested against real-world scenarios and edge cases.
In fact, these steps can become harder when the code is generated by a model:
- It’s unclear whether the author fully understands what they submitted.
- The generated code may introduce unfamiliar patterns or break established conventions.
- Edge cases and unintended side effects may not be obvious.
The result: more code flows into the system faster, but the verification and integration stages become the new choke points.
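To make the review problem concrete, here is a hypothetical Python snippet of the kind of plausible-looking code a model might produce. The function names are illustrative, not taken from any real model output; the point is the side effect that is easy to miss in review:

```python
# Plausible-looking generated code with a subtle side effect:
# the mutable default argument is created once and shared across calls.
def collect(item, bucket=[]):
    bucket.append(item)
    return bucket

first = collect("a")
second = collect("b")   # looks like ["b"], but is actually ["a", "b"]

# The fix a careful reviewer would ask for: a fresh list per call.
def collect_fixed(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket
```

The bug only appears on the second call, so a reviewer who tests the function once sees correct behavior. This is exactly the kind of edge case that makes verification, not generation, the choke point.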
The Cost of Understanding
The biggest cost in software is understanding it — not writing it.
Every line of code is a liability until it’s proven correct, maintainable, and aligned with the system’s goals. LLMs don’t reduce the mental effort required to reason about behavior, identify subtle bugs, or ensure long-term maintainability.
If anything, the cost of understanding can increase when reviewers have to reverse-engineer the reasoning behind generated code.
The Risk of Skipping Shared Context
Software engineering is collaborative. It depends on shared understanding, alignment, and trust. When code is generated faster than it can be discussed or reviewed, teams risk assuming quality instead of ensuring it. This creates silent technical debt — code that works now but becomes a maintenance problem later.
Where LLMs Actually Help
The real value of LLMs is in prototyping and iteration:
- Quickly testing an idea before committing to a full build.
- Creating throwaway code to explore solutions.
- Automating repetitive or boilerplate-heavy tasks.
- Reducing the time to the next realization — the moment you learn something new about the problem or solution.
In these contexts, speed matters more than polish, and the cost of mistakes is low.
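As a sketch of what "throwaway code to explore a solution" can look like, here is a minimal Python prototype that checks whether caching an expensive lookup actually cuts the number of underlying calls. The `slow_lookup` stand-in and the dictionary cache are assumptions for illustration, not a production design:

```python
# Throwaway prototype: is memoizing this lookup worth building properly?
calls = 0

def slow_lookup(key):
    """Stand-in for an expensive call (database, network, model inference)."""
    global calls
    calls += 1
    return key.upper()

cache = {}

def cached_lookup(key):
    # Naive dict cache: good enough to test the idea, not to ship.
    if key not in cache:
        cache[key] = slow_lookup(key)
    return cache[key]

# Three requests for the same key should trigger one underlying call.
for _ in range(3):
    cached_lookup("user-42")
```

If the idea holds (one underlying call instead of three), that learning feeds the minimal spec for the production version; if not, the prototype cost almost nothing to discard.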
How the SDLC Is Evolving
The traditional SDLC often looks like this:
- Research
- Design
- Spec
- Build
- Ship
- Pray
- Deprecate because it didn’t work
A more adaptive, LLM-augmented process could look like this:
- Identify the problem
- Prototype quickly (using LLMs if useful)
- Collect feedback
- Iterate until the idea is solid
- Write a minimal spec for how to build it right
- Build the production version
- Beta test
- Collect feedback and refine
The key difference is front-loading learning and reducing the time to the next realization. Instead of spending months on specs and planning before writing a line of code, teams can validate ideas in days or weeks, then invest in productionizing only what’s proven valuable.
The Hard Truth
LLMs don’t remove the need for clear thinking, careful review, and thoughtful design. They don’t eliminate the human work of aligning on goals, understanding trade-offs, and maintaining systems over time.
Yes, the cost of writing code has dropped. But the cost of making sense of it together as a team hasn’t. That’s still the bottleneck.
If we want faster, better software, we need to focus less on typing speed and more on shortening feedback loops, improving shared understanding, and making the review and integration process more efficient. LLMs can help with that — but only if we use them to accelerate learning, not just to produce more code.