Large language models are amazing. Used well, they can make us feel almost superpowered. Agents today can collect, process, summarize and use a crazy amount of information. Especially in areas and tools we're not familiar with, they can be a massive boost because they'll simply get it done.
So why hasn't that made our work together better? A single reason: consistency. How we use AI varies wildly even across our small team, from masochists like me who rewrite AI-collected material by hand 99% of the time, to straight Ctrl+C Ctrl+V from claude.ai. As the space keeps changing, how we use it is still completely up in the air.
The problem with inconsistency is that it becomes very hard to stack our work on top of each other's. Human work - while slow - used to be more predictable. If one of us 'read' or 'wrote' something in the before times, you could have reasonable expectations about the output. Today that's a lot more difficult.

Having watched human-to-human trust (built on these expectations) erode, here's how it happens: models are great at reading, summarizing and writing things. Humans take credit for this output, which is usually information + opinion. Other humans ask about the information or the opinion, and when they discover that it did NOT come from the human they expected, trust erodes.
Even more catastrophically, if the human underwriting the information can't remember or explain the context in which the AI produced it, trust is almost completely lost.
<aside> 💡
This kind of problem didn't exist with tools like Excel or calculators. They weren't stochastic.
</aside>