Large language models are amazing. Used well, they can make us feel almost superpowered. Agents today can collect, process, summarize, and use a crazy amount of information. Especially in areas and tools we're not familiar with, they can be a massive boost because they'll simply get it done.

So why do we need an AI policy?

A single reason: consistency. AI use varies wildly even across our small team - from masochists like me who re-write AI-collected material by hand 99% of the time, to straight Ctrl+C Ctrl+V from claude.ai. How we use AI (as the space changes) is still completely up in the air.

The problem with inconsistency is that it becomes very hard for us to stack our work on top of each other's. Human work - while slow - used to be more predictable. If one of us 'read' or 'wrote' something in the before times, you could have reasonable expectations about the output. Today that's a lot more difficult.


Having watched human-to-human trust (built on these expectations) erode, here's how it happens: models are great at reading, summarizing, and writing. Humans take credit for this output, which is usually information plus opinion. Other humans inquire about the information or the opinion, and when they discover that it did NOT come from the human they expected, trust erodes.

Even more catastrophically, if the human underwriting the information can't remember or explain the context in which the AI produced it, trust is almost completely lost.

<aside> 💡

This kind of problem didn't exist with tools like Excel or calculators. They weren't stochastic.

</aside>

The Actual AI Policy

  1. Use any and all tools you would like. If you use one frequently for work, talk to Hrishi about getting it covered.
  2. If there is general-purpose intelligence involved at any stage in your tool (like an LLM), treat it like a coworker (or another human somewhere else). That means:
     - Keep receipts. Any and all important words or opinions in your work should be traceable - either back to your brain (at which point you're responsible), or to the specific chat/tool they came from.
     - Watch for tone. English is a wonderful language, but it's also one where it's impossible to separate tone from information. Don't put down something you wouldn't write yourself.
     - Watch for slop. AIs are hyper-verbal (with code and with writing). If something was so long you didn't read it, please don't make us read it. Cut it.