The machines are fine. I'm worried about us.
But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.
Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.
Source: The machines are fine. I'm worried about us.
Minas Karamanis touches the third rail of AI discourse. He beautifully lays out the perils of manufacturing LLM operators instead of growing scientists, philosophers, artists, and the like. Put another way, it's the pressure that crafts the diamond.
There's a valid argument on the other side: humans are fundamentally problem solvers, and part of that is that they will hunt for higher-order problems to solve. The path to those higher-order problems is already so hard that the attrition, and the creative loss we suffer when people fail out of the process, is worse today than it needs to be. On this view, as the tools level up the entirety of humanity to operate on higher-order problems, far more scientific breakthroughs are promised for the future.
Minas counters:
People call this friction "grunt work." Schwartz uses exactly that phrase, and he's right that LLMs can remove it. What he doesn't say, because he already has decades of hard-won intuition and doesn't need the grunt work anymore, is that for someone who doesn't yet have that intuition, the grunt work is the work. The boring parts and the important parts are tangled together in a way that you can't separate in advance. You don't know which afternoon of debugging was the one that taught you something fundamental about your data until three years later, when you're working on a completely different problem and the insight surfaces. Serendipity doesn't come from efficiency. It comes from spending time in the space where the problem lives, getting your hands dirty, making mistakes that nobody asked you to make and learning things nobody assigned you to learn.
and continues to warn of our own ego at play here:
The strange thing is that we already know this. We have always known this. Every physics textbook ever written comes with exercises at the end of each chapter, and every physics professor who has ever stood in front of a lecture hall has said the same thing: you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient. (emphasis mine)
Key warning shot:
But the moment you use the machine to bypass the thinking itself, to let it make the methodological choices, to let it decide what the data means, to let it write the argument while you nod along, you have crossed a line that is very difficult to see and very difficult to uncross. You haven't saved time. You've forfeited the experience that the time was supposed to give you.
The failure mode isn't malice. It's convenience. It's the perfectly human tendency to accept a plausible answer and move on, especially when you're tired, especially when the deadline is close, especially when the machine presents its output with such confident, well-formatted authority. The problem isn't that we'll decide to stop thinking. The problem is that we'll barely notice when we do.
This last step feels inevitable to me. Constant vigilance over an LLM is a hard job, and it's not what our human brains are good at.
I don't know what the future holds. But I am confident I can manage whatever it holds for me, because my basics are strong. I can tell when an LLM seems off. And it's absolutely true that I have that ability only because of the grunt work I have done until now.
If I were a betting man, I'd bet that how we use an LLM matters. This is something I am trying to be vigilant about when raising the next generation. I want them to understand the power and the peril of these systems. Yet a part of me wonders whether we should introduce them at all to untrained brains that don't yet understand the value of persistence and hard work, and how those make one learn.
Despite being a real-life vegetarian, I don't know if I will pull myself all the way to generative-AI vegetarianism. But there's value in being considerate about the usage.