Model Lock-in is the New Vendor Lock-in

Model lock-in is the new vendor lock-in. And it may be worse.

We've all heard of vendor lock-in: the risk of becoming so dependent on a single supplier that you're exposed to price increases and high switching costs.

In the world of AI, an even riskier dynamic is emerging: model lock-in. The frontier model layer is commoditizing over time. For many common enterprise use cases, the top models are increasingly fungible and regularly leapfrog each other in evals. The differences are real, but not decisive for every use case.

AI labs like Anthropic and OpenAI understand this. They are investing billions to build massive data centers and capture as much long-term demand as possible. To reinforce that position, they are surrounding their models with proprietary tooling sold through user subscriptions and enterprise plans.

The strategy is straightforward: get companies trained on tools that only work with one provider's model, and over time make switching operationally and politically expensive.

This risk is distinct because inference economics are very different from software economics. Margins are thinner and costs are uncertain. Much of today's pricing reflects intense competition and subsidized growth rather than steady-state reality.

If inference costs rise in the future (note: per-token prices can fall while per-query costs rise, because queries keep consuming more tokens), companies that trained their staff on tooling from a model provider may find their own margins at the mercy of that provider.
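The per-token vs. per-query distinction is worth making concrete. With hypothetical numbers (these are illustrative, not actual provider prices): if per-token prices halve but token consumption per query quadruples (longer contexts, reasoning traces, agentic loops), the cost per query still doubles.

```python
# Hypothetical numbers for illustration only -- not real provider pricing.
old_price_per_1k_tokens = 0.010   # dollars per 1,000 tokens
new_price_per_1k_tokens = 0.005   # per-token price halves

old_tokens_per_query = 2_000
new_tokens_per_query = 8_000      # token usage per query quadruples

old_cost_per_query = old_price_per_1k_tokens * old_tokens_per_query / 1_000
new_cost_per_query = new_price_per_1k_tokens * new_tokens_per_query / 1_000

print(old_cost_per_query)  # 0.02
print(new_cost_per_query)  # 0.04 -- per-query cost doubled despite cheaper tokens
```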

That's a fragile place to be.

The safer path is to stay model-agnostic:
- Train staff on tools that can work across model providers
- Favor platforms and workflows that can swap models easily
- Build your own tooling that works with all major API protocols
- Or, use independent model-agnostic SaaS vendors
- Preserve the ability to adopt new models as they emerge
- Keep the local open-weight path open, even though those models aren't top-tier today
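One way to act on the "build your own tooling" and "swap models easily" points is to keep business logic behind a thin provider-agnostic interface, with vendor SDK calls isolated in adapters. A minimal sketch, assuming a hypothetical `ChatClient` protocol and stub adapters (real adapters would wrap each provider's SDK):

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    """Plain result type -- no vendor-specific response objects leak out."""
    text: str
    model: str


class ChatClient(Protocol):
    """Hypothetical provider-agnostic interface: one method, plain types."""
    def complete(self, prompt: str) -> Completion: ...


class OpenAIStyleClient:
    """Stub adapter; a real one would call the provider's SDK here."""
    def __init__(self, model: str = "gpt-example") -> None:
        self.model = model

    def complete(self, prompt: str) -> Completion:
        return Completion(text=f"[{self.model}] {prompt}", model=self.model)


class AnthropicStyleClient:
    """Second stub adapter, interchangeable with the first."""
    def __init__(self, model: str = "claude-example") -> None:
        self.model = model

    def complete(self, prompt: str) -> Completion:
        return Completion(text=f"[{self.model}] {prompt}", model=self.model)


def summarize(client: ChatClient, document: str) -> str:
    # Business logic depends only on the interface, never on a vendor SDK,
    # so swapping providers is a one-line change at the call site.
    return client.complete(f"Summarize: {document}").text
```

The point of the design is that switching costs stay at the adapter layer: staff learn `summarize` and `ChatClient`, not a vendor's proprietary tooling.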

Relying heavily on tooling from the major labs may feel convenient today. Strategically, it's an avoidable mistake.


Originally posted on LinkedIn on January 17, 2026.
