The core idea is the Theory of Constraints, and it goes like this:
Every system has exactly one constraint. One bottleneck. The throughput of your entire system is determined by the throughput of that bottleneck. Nothing else matters until you fix the bottleneck.
That’s the part most people get. Here’s the part they don’t, and it’s the part that should scare you:
When you optimise a step that is not the bottleneck, you don’t get a faster system. You get a more broken one.
Think about it mechanically. If station A produces widgets faster but station B (the bottleneck) can still only process them at the same rate, all you’ve done is create a pile of unfinished widgets between A and B. Inventory goes up. Lead time goes up. The people at station B are now drowning. The pile creates confusion about what to work on next. Quality tanks because everyone’s triaging instead of thinking.
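The mechanics are easy to see in a toy simulation (all rates here are made up for illustration; this is a sketch, not a model of any real pipeline):

```python
# Toy two-station pipeline: station A feeds station B,
# and B is the bottleneck.

def simulate(rate_a, rate_b, ticks):
    """Return (finished, wip) after running the pipeline for `ticks` steps."""
    finished = 0
    wip = 0  # unfinished widgets piled up between A and B
    for _ in range(ticks):
        wip += rate_a            # A produces into the buffer
        done = min(rate_b, wip)  # B can only drain at its own rate
        wip -= done
        finished += done
    return finished, wip

# Baseline: A matches the bottleneck.
print(simulate(rate_a=2, rate_b=2, ticks=100))  # (200, 0)

# "Optimise" A to triple speed: throughput is unchanged, inventory explodes.
print(simulate(rate_a=6, rate_b=2, ticks=100))  # (200, 400)
```

Tripling station A's speed didn't finish a single extra widget; it just left 400 half-done ones rotting in the buffer.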
Source: “If you thought the speed of writing code was your problem, you have bigger problems”
Andrew Murphy, whose last name is unfortunately too famous for this to be called Murphy’s law, nails the issue: when you measure the wrong output metric and call it productivity, you often end up making the system worse.
This is why people burn out while leaders blame them for not “working hard enough.” Improving throughput means truly understanding the system and focusing on the bottleneck. It’s why “blockchain” wasn’t the solution to better internet businesses, and why “LLMs” are not going to be the panacea for productivity problems.
Let me state it another way (so nobody misreads this as me dissing LLMs): LLMs are a tool, not the solution to all productivity problems. Those problems are usually harder to solve than any single tool can fix.