Why I Didn’t Choose Other AI Providers
When I tell people I run most of my workflow on GPT-5 and OpenAI, the next question is usually: “Why not use Anthropic? Or Mistral? Or an open-source model?”
It’s a fair question. The ecosystem has never been richer, and there are real strengths in the alternatives. But after testing most of them in my actual day-to-day flow—agency work, software builds, context switching, and Labs writing—I keep coming back to OpenAI. Here’s why.
Anthropic (Claude)
What it’s great at:
- Long context windows—Claude can read giant docs without blinking.
- Friendly tone—often feels more conversational and careful with language.
Why I didn’t choose it as my base:
- Responses come back more slowly, especially on very large prompts.
- It sometimes hedges too much, which makes it less direct when I need clarity fast.
- Ecosystem fit isn’t as tight: it doesn’t slot into VS Code or the rest of my toolchain as seamlessly.
I do use Claude occasionally for mega-summaries, but not as my daily driver.
Mistral
What it’s great at:
- Speed and cost—very efficient, especially for lighter tasks.
- Open weights—good if you want to self-host or fine-tune.
Why I didn’t choose it as my base:
- The outputs feel “snappier” but not as deep or consistent.
- Running my own infrastructure to unlock the best of Mistral isn’t worth the overhead for me right now.
- Ecosystem support is still younger compared to OpenAI’s integration network.
Mistral is exciting, but it’s not yet enough to anchor my entire workflow.
Open-Source Models (LLaMA, Mixtral, etc.)
What they’re great at:
- Full control—run them on your own hardware.
- Privacy—no data leaves your environment.
- Flexibility—fine-tune for niche domains.
Why I didn’t choose them as my base:
- The cost of GPUs, maintenance, and updates adds up fast.
- The performance gap is closing, but GPT-5 is still stronger across my mix of use cases.
- I don’t want to run my own model ops team just to get what OpenAI already delivers with reliability.
For me, it’s the classic “build vs. buy” equation. I could build, but it’s not where my leverage is right now.
Others (Google Gemini, Perplexity, etc.)
I’ve tried them. Each has moments of brilliance. But none has matched the balance of reliability, integrations, and ecosystem support that OpenAI provides me today.
Why OpenAI Still Wins for Me
- Ecosystem Fit: Copilot Pro in VS Code, APIs for my apps (see the sketch after this list), ChatGPT app for daily notes. Everything snaps together.
- Reliability: I can trust GPT-5 to consistently give me structured, usable outputs across domains.
- Momentum: My workflow has grown up alongside OpenAI. That history compounds—I don’t need to re-invent my stack.
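To make "APIs for my apps" concrete, here's roughly what that glue code looks like in my smaller tools. This is a minimal sketch using the official openai Python package; the summarize helper, the prompt, and the exact model identifier are illustrative assumptions, not lifted from any specific app of mine.

```python
# Minimal sketch: calling the OpenAI API from a small helper script.
# Assumes the `openai` Python package (v1+) is installed and that
# OPENAI_API_KEY is set in the environment. The model identifier
# "gpt-5" is illustrative; use whatever model your account exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(text: str) -> str:
    """Ask the model for a short, structured summary of `text`."""
    resp = client.chat.completions.create(
        model="gpt-5",
        messages=[
            {"role": "system", "content": "Summarize the input in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Notes from today's client call go here..."))
```

The snippet itself isn't the point. The point is that a call like this drops into an existing script in a few lines, which is most of what "ecosystem fit" means for me in practice.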
Closing Thought
I’m not anti-alternatives. I think Anthropic, Mistral, and open-source LLMs all push the field forward in ways we need. But my measure isn’t “who has the most tokens or the flashiest benchmark.” It’s: “Who can I trust to sit at the center of my workflow, day in and day out?”
Right now, that’s still GPT-5 and OpenAI.