Model Lock-in is the New Vendor Lock-in
Model lock-in is the new vendor lock-in. And it may be worse.
Claude Code, Codex, OpenCode, etc. - these tools are about way more than just "code". Or equivalently, we vastly underestimate the versatility of code.
We're drowning in tools. As AI makes it trivial to build software for extremely narrow problems, we risk creating a tool zoo that demands more attention than it saves.

Above is an illustration of an end-to-end analytical workflow. I've been putting the puzzle pieces together in VerbaGPT so it can run such a workflow autonomously, and I was finally able to test it.

Contrary to what AI hypesters or doomers may have us believe, our AI future isn't written or inevitable.

In The Matrix, there's a great scene where Neo points to a helicopter and asks Trinity, "Can you fly that thing?" Trinity responds, "Not yet," and asks the operator to remotely upload the skill to her brain. Moments later she says, "I can fly that thing now."
The new "agent skills" protocol for LLMs reminds me of that scene.

6,000+ worlds discovered — and we've barely looked at most of the sky.
I've written before that using LLMs with numeric data—like databases—can actually reduce hallucinations. LLMs aren't great with numbers or freelancing with facts, but they are very good at writing code. When you lean into that strength, the results can be impressive.
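To make the pattern concrete, here's a minimal sketch of what "leaning into that strength" looks like: rather than asking the model to recall or compute a number in free text, you ask it to write a query and let the database engine produce the answer. The SQL string below is a stand-in for model output (no real API call is made), and the toy `sales` table is invented for illustration.

```python
import sqlite3

# Toy sales table standing in for a real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("west", 80.0), ("east", 50.0)],
)

# Instead of asking the model "what were total east-region sales?"
# and trusting a free-text number, ask it to WRITE the query.
# This string is a placeholder for model-generated output.
model_generated_sql = "SELECT SUM(amount) FROM sales WHERE region = 'east'"

# The number comes from the database engine, not the model's memory,
# so there is nothing for the model to hallucinate.
(total,) = conn.execute(model_generated_sql).fetchone()
print(total)  # 170.0
```

The design point is the division of labor: the model handles translation from question to code, and the deterministic engine handles the arithmetic and the facts.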
Welcome to the upside-down!
A friend at a party might tell you to buy that hot new stock or IPO. Or the stock market might tank tomorrow and you may want to pull your money out. Or you might be tempted into active management strategies with higher fees. Resist the urge!
Speculation comes with higher risk, and you rarely hear from the many people who lose money chasing higher returns. It's human nature to share stories that paint us in a good or smarter light, so you only hear stock-picking or market-timing stories from the survivors of the wreckage those two activities leave behind. This is called survivorship bias.
We're witnessing the rise of a peculiar phenomenon in AI: agents that throw massive amounts of compute at problems that could be solved with a fraction of the tokens. It's like watching someone use a sledgehammer to crack a walnut—technically effective, but wildly inefficient.