But the real threat isn’t either of those things. It’s quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you’re doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can’t produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can’t sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

Frank Herbert (yeah, I know I’m a nerd), in God Emperor of Dune, has a character observe: “What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there’s the real danger.” Herbert was writing science fiction. I’m writing about my office. The distance between those two things has gotten uncomfortably small.

Source: The machines are fine. I’m worried about us.

Minas Karamanis touches upon the third rail of AI discourse. He beautifully lays out the perils of manufacturing LLM operators instead of growing scientists, philosophers, artists, and the like. Put another way: it's the pressure that crafts the diamond.

There's a valid argument on the other side: humans are fundamentally problem solvers, and part of that means they will hunt for higher-order problems to solve. In fact, the path to higher-order problems is already so hard that the number of people who fall off it, and the creative loss we suffer from that failure, is far worse today. If the tools level up the entirety of humanity to operate on higher-order problems, far more scientific breakthroughs are promised for the future.

Minas counters:

People call this friction “grunt work.” Schwartz uses exactly that phrase, and he’s right that LLMs can remove it. What he doesn’t say, because he already has decades of hard-won intuition and doesn’t need the grunt work anymore, is that for someone who doesn’t yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can’t separate in advance. You don’t know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you’re working on a completely different problem and the insight surfaces. Serendipity doesn’t come from efficiency. It comes from spending time in the space where the problem lives, getting your hands dirty, making mistakes that nobody asked you to make and learning things nobody assigned you to learn.

and continues, warning of our own ego at play here:

The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we’ve collectively decided that maybe this time it’s different. That maybe nodding at Claude’s output is a substitute for doing the calculation yourself. It isn’t. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient. (emphasis mine)

Key warning shot:

But the moment you use the machine to bypass the thinking itself, to let it make the methodological choices, to let it decide what the data means, to let it write the argument while you nod along, you have crossed a line that is very difficult to see and very difficult to uncross. You haven’t saved time. You’ve forfeited the experience that the time was supposed to give you.

The failure mode isn’t malice. It’s convenience. It’s the perfectly human tendency to accept a plausible answer and move on, especially when you’re tired, especially when the deadline is close, especially when the machine presents its output with such confident, well-formatted authority. The problem isn’t that we’ll decide to stop thinking. The problem is that we’ll barely notice when we do.

This last step feels inevitable to me. Constant vigilance over an LLM's output is hard work, and it's not what our human brains are good at.

I don't know what the future holds. But I am confident I can manage whatever it holds for me, because I know my basics are strong. I am capable of telling when an LLM seems off. And it's absolutely true that I have that ability only because of the grunt work I've done until now.

If I were a betting man, I'd bet that how we use an LLM matters. This is something I am trying to stay vigilant about while raising the next generation. I want them to understand both the power and the peril of these systems. Still, a part of me wonders whether we should introduce these systems to untrained brains at all, to minds that don't yet understand the value of persistence, of doing the hard work, and of how that work makes one learn.

Despite being a real-life vegetarian, I don't know if I will pull myself all the way to Generative AI vegetarianism. But there's value in being deliberate about the usage.