Looking back, I feel like Rails kicked off the last great cycle of dev tooling. That was 2004. Then the open-source ecosystem took over: new frameworks, new languages, new deployment patterns, new standards, one after another for twenty years. Git. Prototype. jQuery. Jenkins. Kafka. Docker. React. Kubernetes. Terraform. TensorFlow.
The churn was already too fast for most enterprises to keep up with. But it didn’t matter, because they weren’t trying to. Startups, big tech and HFT shops rode the bleeding edge because staying current was their competitive advantage. For everyone else, technology was a means to an end, not the business itself. They valued long backward-compatibility loops and preferred buying over building - and that preference created the golden age of SaaS. Jira for planning. GitHub for code. VS Code for editing. CI/CD for shipping. Teach an IT team the stack in a few months and they’d be productive for years.
That held until about eighteen months ago. Now, AI agents write code. New models ship every quarter and each one changes what’s possible. An employee’s first month on the job now includes tools that didn’t exist when the offer letter went out. What worked six months ago is already behind.
Every company is responding. If you take current AI strategy advice at face value, the answer is obvious. Every company should be trying to go AI Native right now. Hire a Head of AI. Stand up a Centre of Excellence. Reorganise engineering around agents. Redesign the workflow. Redesign the org.
I now believe that’s the wrong conclusion.
What you need is a business that does things your competitors still can’t.
Automating processes that cost too much to automate. Building internal tools that were never worth building. Operating at speeds the pre-AI org couldn’t reach.
Going AI Native is one way to get there. For most companies, it’s the wrong way.
In 1972, David Parnas published a paper called “On the Criteria To Be Used in Decomposing Systems into Modules.” It’s one of those papers that’s more cited than read, which is a shame, because the core idea is devastatingly simple. The most natural way to break a complex system into parts - by function, by what each part does - is usually wrong. The right way is to decompose by what’s likely to change. Each module should hide one decision that might change, behind an interface that doesn’t.
Parnas was not a man who softened his conclusions. He resigned from Reagan’s Star Wars advisory panel because he concluded the software couldn’t be made trustworthy - and said so publicly.
Most companies are making exactly the mistake Parnas described - but at the organisational level, not the code level. The AI response gets assigned to the function that sounds closest - IT, engineering - and the org chart stays the same.
The thing most likely to change right now is how software gets built.
Not one aspect of it. All of it - the tooling, the workflows, the team topology, the hiring profiles, the cost structures, the quality assurance process, the way institutional knowledge gets created and retained. Assign that to a department and you’ve created the wrong modules.
You now have a 2022 engineering org with a chatbot bolted on.
Say you get the decomposition right. You reorganise around it. Hire differently, restructure teams, adopt agentic workflows, rethink quality assurance. You do the hard work.
Conway’s Law says it won’t hold.
In 1968, Melvin Conway observed that organisations produce systems whose structure mirrors their own communication structure. Harvard Business Review rejected the paper. Fred Brooks cited it in The Mythical Man-Month seven years later.
If your engineering org is structured around 2022 assumptions - sprint-based delivery, human-only code review, manual QA, siloed teams owning features - then your output will reflect those assumptions.
Sure, you can give every engineer Claude Code. If the org is structured for waterfall-with-sprints, the AI writes code faster but everything around the code moves at the same speed. Conway guarantees it.
Staying AI Native is harder than going AI Native.
The tooling changes every quarter. New model capabilities invalidate existing workflows. Cost curves shift monthly. The best practices of January are obsolete by June. The org that doesn’t reorg ships the wrong system. Conway’s Ratchet. It clicks every quarter and it never clicks back.
Stop reshaping and the org calcifies around yesterday’s tools. You’re no longer AI Native. You’re a 2025 org in 2026 - which is exactly as bad as being a 2022 org in 2025.
Enterprise procurement, vendor evaluation, and tooling rollout cycles were built for an era when core tools changed every two to three years. Those cycles still run on twelve-to-eighteen-month clocks. The procurement process itself is a Conway artefact - it mirrors the rate of change of the era that built it. By the time the org has evaluated, procured, and rolled out a tool, the tool is already behind.
For companies whose core business isn’t software engineering - insurance, logistics, healthcare, finance - this is structurally hard to sustain. The engineering org exists to serve the core business. Asking it to also continuously redesign itself around a shifting paradigm is asking it to serve two masters. And the second master moves relentlessly every quarter.
It gets worse. Everything above assumes the demand for software stays roughly constant - that you’re building the same things, just with better tools. That assumption is wrong.
In 1865, William Stanley Jevons noticed something counterintuitive about coal. As steam engines became more efficient, the obvious prediction was that Britain would burn less coal. The opposite happened. Efficiency made coal-powered machinery economical for uses that had previously been too expensive. Demand exploded. Total coal consumption rose as the engines got better. This is Jevons’ Paradox: make something dramatically cheaper to produce and you don’t get less of it. You get vastly more.
When the cost of building software drops by an order of magnitude, every process that was too expensive to automate becomes worth automating. Every internal tool that was never worth building becomes trivial to build. Every one-off integration, every custom workflow, every bespoke reporting dashboard - capabilities the business couldn’t have before.
If a company claims to have transformed but the business itself isn’t visibly accelerating, the transformation hasn’t taken hold.
This demand explosion triggers Conway’s Ratchet again - at a higher scale.
The org that just finished reshaping itself to build software ten times faster now has to reshape itself to handle ten times the demand for software. The ratchet doesn’t just mean keeping up with tooling changes. It means keeping up with the demand that those tooling changes unleash.
The chain of problems never terminates: adopt new tools, reshape the org, handle the demand explosion, reshape the org again.
Parnas had an answer for this too - it’s the same paper.
When a decision is volatile - when it’s likely to change - hide it behind a stable interface. The interface stays the same. The implementation behind it changes as often as it needs to.
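Parnas’s principle translates directly into code. A minimal sketch, with hypothetical names (`CodegenBackend`, `ToolOfTheQuarter`, `ship` are illustrative, not from the paper): the interface the caller depends on stays fixed, while the implementation behind it can be swapped every quarter.

```python
from abc import ABC, abstractmethod

class CodegenBackend(ABC):
    """Stable interface: callers depend only on this contract."""
    @abstractmethod
    def build(self, spec: str) -> str: ...

class ToolOfTheQuarter(CodegenBackend):
    """Volatile implementation: replaced freely behind the interface."""
    def build(self, spec: str) -> str:
        return f"artifact built from: {spec}"

def ship(backend: CodegenBackend, spec: str) -> str:
    # Caller code never changes when the backend does.
    return backend.build(spec)

print(ship(ToolOfTheQuarter(), "invoice-reconciliation tool"))
```

Swapping `ToolOfTheQuarter` for next quarter’s tool touches nothing on the caller’s side - which is the whole point.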
The volatile decision right now is everything about how software gets built.
All of it in flux. All of it changing faster than any single company’s org can absorb - unless absorbing that change is the company’s entire reason for existing.
A company like that is a Parnas module.
Amazon did this to supply chains. Before Amazon, logistics was fragmented and error-prone, and customers felt every mistake - late deliveries, lost packages, painful returns. Amazon encapsulated it behind a smooth interface. The customer never sees the warehouse robotics, the route optimisation, the carrier negotiations. They see a button that says “Buy Now” and a package that arrives on time.
The module does the same thing for software development. The customer says what they need built. The module builds it, maintains it, keeps it current. The tooling behind the interface changes every quarter. The interface doesn’t.
The customer doesn’t need to know the module switched from Cursor to Claude Code last month. Doesn’t need to know the last model drop cut costs by 40%. Doesn’t need to know that the engineering team restructured twice this quarter to absorb a capability that didn’t exist in January. The customer needs working software that keeps working. Everything else is behind the interface.
Parnas, applied to an organisational problem at industry scale.
The module absorbs Conway’s Ratchet. The customer handles their core business. The module handles the tooling churn, the org reshaping, the demand explosion. The customer gets capabilities that weren’t previously possible - delivered and maintained behind an interface that doesn’t change every time the industry does.
Build the module yourself - an entire company within your company, perpetually reshaping itself without disrupting the business.
Or hire one that already exists.
Cross-posted on Twitter/X.