I need you to do deep research and build me proper standalone documentation for accessing LLM models from at least the three main providers: OpenAI, Anthropic, and Google Gemini. I'm working in TypeScript.

We want to know the state-of-the-art method for accessing all the models, including:

- the model list and pricing
- how to access reasoning models vs. non-reasoning models, and how to get access to thinking tokens when available
- idiosyncrasies and known issues (check forums)
- max tokens, response formats (ideally well-typed), and how streaming works
- differences between SDK versions, and completely new features
- JSON mode, schema support, and prefill
- differences in behavior between providers
- token counting and cost calculation
- how multi-modal data is processed, for both input and output, plus best practices and preprocessing
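For the pricing section, I'd want it concrete enough that I can compute per-call cost directly, e.g. something along these lines (the model name and numbers below are made-up placeholders, not real prices — the doc should fill in current, verified values):

```typescript
// Per-million-token pricing table. These entries are PLACEHOLDERS,
// not real prices; the doc should supply current per-model values.
type Pricing = { inputPerMTok: number; outputPerMTok: number };

const PRICING: Record<string, Pricing> = {
  "example-model": { inputPerMTok: 3.0, outputPerMTok: 15.0 },
};

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

/** Compute the USD cost of a single call from its reported token usage. */
function costUSD(model: string, usage: Usage): number {
  const p = PRICING[model];
  if (!p) throw new Error(`No pricing entry for model: ${model}`);
  return (
    (usage.inputTokens / 1_000_000) * p.inputPerMTok +
    (usage.outputTokens / 1_000_000) * p.outputPerMTok
  );
}

// e.g. 1,000 input tokens + 500 output tokens ≈ $0.0105 at these rates
const cost = costUSD("example-model", { inputTokens: 1000, outputTokens: 500 });
```

That sort of helper should fall straight out of the pricing tables in the doc.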

This information is now spread out across a large number of documents of varying quality, so consult multiple sources and corroborate them. Feel free to flag anything contradictory, or anything you couldn't find that I should look up myself.

I need you to do a very, very deep dive and make me a doc that can function as a standalone replacement for all the API docs when building something that can connect to any one of these providers. Not just the happy paths, but a proper spec. Be wary of non-first-party guides that just repeat the same material.
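To give a sense of the level of typing I'm after in the response-format section, here's a rough sketch of a provider-agnostic shape — every name here is my own invention, not any provider's actual API, so the doc should map each provider's real response types onto something like it:

```typescript
// Sketch of a well-typed, normalized response. All names are
// hypothetical — not taken from any actual SDK.
type ContentBlock =
  | { kind: "text"; text: string }
  | { kind: "thinking"; text: string } // reasoning/thinking tokens, when exposed
  | { kind: "toolCall"; name: string; argsJson: string };

type StopReason = "stop" | "maxTokens" | "toolUse" | "contentFilter";

interface NormalizedResponse {
  provider: "openai" | "anthropic" | "gemini";
  model: string;
  content: ContentBlock[];
  stopReason: StopReason;
  usage: { inputTokens: number; outputTokens: number; reasoningTokens?: number };
}

// Example: extract only user-visible text, skipping thinking blocks.
function visibleText(r: NormalizedResponse): string {
  return r.content
    .filter((b): b is Extract<ContentBlock, { kind: "text" }> => b.kind === "text")
    .map((b) => b.text)
    .join("");
}
```

The doc should spell out, per provider, which of these fields exist, what they're actually called, and where they diverge.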