The safety framework we’ve built around human error, what the psychologist James Reason called the Swiss cheese model, assumes errors have human authors who can be understood, retrained, and held accountable. Defensive layers work because we know the shape of the holes. That framework doesn’t map onto systems that have no intention and no accountability in any form we recognize. ==So we oscillate between over-trusting AI when it works and rejecting it outright when it fails, and neither response is the calibrated tolerance we’ve spent decades building for human fallibility.==

Source: Human Error is OK! Machine Madness is a No-No! Why? 🤯

The jarring swing between blind trust and outright rejection rings true. My takeaway is that the trust-but-verify model can only go so far. In my own experience, the sheer volume of content that must be reviewed on the way to a final outcome is growing exponentially. I cannot read all of it all the time, and as a result my brain slips into pattern-matching mode:

*It’s done all these specs and plan documents so well until now; it should be okay.*

However, every time I’ve found an error, it has been in a process the model had done right many times before. Om’s article resonates with me because my reaction isn’t the one I’d have to a colleague’s mistake; it’s much closer to my reaction to a machine. And yet, technically, I’m also taking on the additional mental load of checking a colleague’s work, a colleague who, in turn, will either get better or be replaced.