LLMs collapse the product loop

Two weeks, two side projects, two very different partners: one deep in code, the other steeped in business ops. Building with both made something obvious: large language models aren’t just engineering accelerants. They collapse the entire product loop.

For the past decade, most teams have organized around a trinity - PM, UX, Eng. Because engineering was scarce and expensive, we optimized for engineering leverage. The culture drifted toward documents and handoffs: PRDs, mocks, design docs, reviews. “Handoff” became the milestone; building became the after-party. As a result, learning from customers slid months to the right.

Somewhere in there, “throwaway work” became a slur. Prototyping without a path to production was treated as waste. The best engineers quietly ignored that taboo, hacked something together, and came back with actual signal. Everyone else waited for the next review.

LLMs change the math. They parallelize the trinity and radically reduce the cost of being wrong.

The point isn’t that AI writes production code; it’s that it shrinks time-to-signal. Treat prototypes as wind-tunnel models: not for flying, but for learning how air hits your wing.

A faster loop

Run a 72-hour Learning Sprint: build just enough to put in front of five users. Kill fast or iterate. “Throwaway” code is paid research.

Change the scoreboard

If we continue to (over)optimize for engineering utilization, we will keep writing documents. If we optimize for learning, we will ship smaller bets - faster. A few ways to change what the scoreboard measures:

Make OKRs smaller and more numerous. Measure the loop, not the launch. Celebrate the team that invalidates a shiny idea in three days.

Regulated shouldn't mean slow

Finance and healthcare have longer cycles, but the pattern should still hold.

Over the long term, we also have to bring regulators along, moving them from blockers to collaborators.

Make product fun again

The joy of product is the rapid loop: talk to users, build a little, learn a lot. LLMs give that loop back. Use them to parallelize the trinity, to lower the cost of being wrong, and to move learning left. Optimize for time-to-signal, and the rest of the process starts behaving.

Ship the loop, not the doc.