The future is waterfall
The future calls for multi-agent orchestration
The future rewards M-shaped people
This is not my original idea - the concept is widespread on the internet. The traditional model has been the T-shaped professional: the vertical bar represents depth of skill and expertise in a single field, whereas the horizontal bar is the ability to collaborate across disciplines. In software development, this meant being, say, a backend engineer who understands enough frontend and DevOps to collaborate effectively.

But LLMs don't just allow us to do things better. Contrary to the popular belief that AI accelerates brain rot, I find that motivated people learn faster with AI support. The other day, my staff engineer gave Claude Code access to the Postgres source code and proceeded to drill into some very technical questions, building expertise that would otherwise have been impossible for us to acquire in a short amount of time. LLMs give us access to consultancy we didn't have before.

Instead of knowing one thing really deeply (the hallmark of individual contributors in the past), they allow us to know many things deeply, hence the M-shaped analogy (a lowercase m would have been better; I was initially clueless about what to make of the capital M). This shift is profound for career development. The traditional advice to "specialize or generalize" is becoming obsolete. The future of career advancement lies in connecting multiple domains of deep expertise.
The future cares more about latency than elegance
I used to agonize over code. Variable names. Function boundaries. The perfect abstraction. I was taught that good code is elegant code, and elegant code is maintainable code. The logic was sound: you write code once, you read it a hundred times, so optimize for reading.
But what if you write code once and never read it?
Vibe coding can be quite wasteful. I am encouraged not to bother with the perfect abstraction and just focus on the problem at hand. A script to migrate some data. A one-off analysis. A prototype to test an idea. The LLM generates it. I run it. I delete it. If I need it again, I regenerate it. The regeneration takes two minutes. The careful crafting would have taken two hours. It is a future of quantity over quality. Many people I know would hate this. I hate it too, having read Zen and the Art of Motorcycle Maintenance in my formative years. But I am afraid this is happening regardless of my preference.
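To be concrete, here is the kind of script I mean. A minimal sketch of a hypothetical one-off backfill; the database, file, and column names are all made up:

```python
# One-off backfill: copy legacy_email values from a CSV export into users.email.
# Run once, delete, regenerate if it's ever needed again.
import csv
import sqlite3

conn = sqlite3.connect("app.db")                  # hypothetical local database
with open("legacy_users.csv", newline="") as f:   # hypothetical export
    for row in csv.DictReader(f):
        conn.execute(
            "UPDATE users SET email = ? WHERE id = ?",
            (row["legacy_email"], row["id"]),
        )
conn.commit()
conn.close()
```

No abstraction, no error handling, no tests. Two minutes to regenerate beats two hours to craft.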
This changes what “good” means. Good code used to mean elegant code. Now good code means fast-to-produce code. Latency is the new elegance. The question isn’t “is this beautiful?” but “how quickly can I get something that works?”
The future doesn’t look kindly on some best practices
This realization sent me down a rabbit hole. How many best practices are solutions to problems that AI simply doesn’t have?
Take “keep functions short.” I was taught 20 lines max. The reasoning: humans have limited working memory. We can’t hold a 500-line function in our heads. So we break things into digestible chunks. But Claude processes massive functions without breaking a sweat. If anything, too many tiny abstractions make it harder for Claude to follow the flow. The function length rule was never about the code. It was about the human brain.
Comments and static documentation that drift from reality within a week? They might still be useful in specific situations, say, when the reader doesn't have access to the source code. But when people do have access? An LLM can just read the actual code and tell me what it does now, not what someone wrote it did six months ago.
Story point estimation? When Claude can prototype something in twenty minutes, “how many points is this?” becomes “let me just try it and see.” The estimation ritual was about managing uncertainty over weeks of human effort. The uncertainty shrinks when the effort shrinks.
Not all practices are heading for the graveyard though. Some survive, but for different reasons. DRY used to exist because humans forget to update all instances when logic changes. Copy-paste five times, change it in four places, spend three hours debugging the fifth. Classic. AI doesn't have this problem. It can regenerate boilerplate on demand and won't forget the fifth copy.
But DRY still matters. Less code means more context fits in the LLM’s context window. Every duplicated block is tokens that could have been spent helping Claude understand the rest of your codebase. The practice survived, but the why behind it shifted completely.
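A back-of-the-envelope illustration, using the crude heuristic of roughly 4 characters per token (an approximation, not a real tokenizer; the block size is made up too):

```python
# Rough cost of duplication measured in context-window terms.
block_chars = 1200   # a ~30-line duplicated block, hypothetical size
copies = 5

def approx_tokens(chars: int) -> int:
    return chars // 4  # crude ~4 chars/token heuristic

duplicated = approx_tokens(block_chars * copies)
deduplicated = approx_tokens(block_chars)  # one shared helper instead

print(f"duplicated:   ~{duplicated} tokens")    # ~1500
print(f"deduplicated: ~{deduplicated} tokens")  # ~300
# The ~1200-token difference is window space Claude could have spent
# on the rest of the codebase.
```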
I have been coding professionally for 15 years now. I tend to see myself as the old guard of (some) old values. I groan at kids using the Python shell to query instead of writing raw SQL. I suspect that in the years to come, the tension between dogmatic and pragmatic will give me some serious cognitive dissonance.
The future cares about audit, not review
If humans aren’t writing the code, and humans aren’t reading the code, how do we know the code is any good? The answer used to be code review. But when an LLM generates a 500-line PR in ten minutes, and the three previous PRs were reasonably good… my attention drifts.
I find myself caring more about things I can verify without reading every line. The obvious stuff: does it have tests? Do the tests actually test behavior, not just chase coverage? Can I trace requirements to implementation without understanding the implementation itself?
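Some of this is mechanical enough to script. A minimal sketch of one such check, assuming tests live under tests/ and the base branch is main (both assumptions about the repo layout):

```python
# Cheap audit signal: did this branch touch tests along with source code?
import subprocess

changed = subprocess.run(
    ["git", "diff", "--name-only", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

src = [p for p in changed if p.endswith(".py") and not p.startswith("tests/")]
tests = [p for p in changed if p.startswith("tests/")]

if src and not tests:
    print("Source changed but no tests touched - look closer.")
else:
    print(f"{len(src)} source files, {len(tests)} test files changed.")
```

It tells me nothing about whether the tests are any good, but it flags the PRs that deserve a closer look without my reading every line.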
But increasingly I care about the generation process, not just the output. What was the plan before the code was generated? What context did the agent have access to? Enough context? Too little? Too much noise? Did it have the right files in its window, or was it hallucinating an API that doesn’t exist? Did it follow the technical design, or did it improvise?
These questions feel strange. I’m not reviewing the code. I’m reviewing the conditions under which the code was born. It’s like judging a student’s exam by checking whether they had the right textbook open, not by grading their answers.
The future looks like layers of automated verification with humans doing spot checks. AI writes code. Different AI reviews code. Static analysis runs. Humans audit the audit - sampling PRs to calibrate confidence, checking that the verification system itself is trustworthy.
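The sampling itself can be dead simple. A sketch, with a placeholder 10% audit rate and made-up PR numbers standing in for whatever your forge's API would return:

```python
# Sample a fraction of merged PRs for human audit, to calibrate trust
# in the automated verification layers.
import random

AUDIT_RATE = 0.10  # assumption: spot-check 10% of merged PRs

def pick_audit_sample(merged_prs: list[int], rate: float = AUDIT_RATE) -> list[int]:
    k = max(1, round(len(merged_prs) * rate))
    return sorted(random.sample(merged_prs, k))

# Hypothetical PR numbers; in practice this list comes from your forge's API.
merged = list(range(4200, 4300))
print("Audit these by hand:", pick_audit_sample(merged))
```

If the sampled PRs keep passing human scrutiny, trust the pipeline more and sample less. If they don't, the verification system itself needs fixing.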
Artisanal code review is declining. Not because the wisdom doesn’t matter, but because it is not always required.
---
tl;dr: The future is fat Jira tickets and multi-agent orchestration humming through the night. It needs M-shaped people who can man the beast - with enough depth across domains to know when the machine is producing gold versus garbage.
