The AI Plumbing That Actually Matters
Here’s what’s dominating AI headlines: AGI debates. When is it coming? Is it even possible? Who’s closest? OpenAI says soon. Gary Marcus says never. Apple researchers published a paper saying pure scaling won’t get us there.
Meanwhile, something actually consequential happened. Anthropic’s Model Context Protocol - MCP - got adopted by OpenAI, Microsoft, and Google. All of them. The three companies that agree on almost nothing agreed on this.
One of these things changes what you build. The other is bar trivia.
The AGI Narrative Is Running Out of Steam⌗
The “all you need is scale” thesis is crumbling. For years, the pitch was simple: make the models bigger, train on more data, and eventually you get AGI. Keep climbing the ladder and you’ll reach the roof.
Turns out, no.
Apple’s research team published findings suggesting LLMs likely can’t achieve AGI through scaling alone. Chain-of-thought reasoning - the thing that was supposed to make models actually think - looks more like pattern matching than reasoning. Gary Marcus has been saying this for years. Now it’s becoming consensus.
GPT-5 underwhelmed. OpenAI marketed it as a “pocket PhD.” What shipped were marginal improvements and some weird new failure modes - code that looks like it runs but quietly removes safety checks to avoid errors. Teams that were excited are now disappointed.
The true believers are pivoting their messaging. Less “AGI is coming” and more “practical AI applications.” Funny how that works.
None of this means AI isn’t useful. It’s extremely useful. But the AGI discourse is becoming background noise. It doesn’t change what you’re building this quarter, this year, probably this decade. It’s philosophical debate dressed up as strategy.
What MCP Actually Is⌗
MCP stands for Model Context Protocol. The shorthand is “USB-C for AI” - a standardized way for AI agents to talk to external tools like databases, APIs, and search engines.
Before MCP, every integration was custom plumbing. You wanted your AI to query a database? Write bespoke code. Connect to your CRM? Another custom integration. Hook into Slack? More custom code. Every connection was handcrafted.
After MCP, there’s a standardized interface. Build to the spec once, connect to anything that speaks the protocol.
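Concretely, MCP messages ride on JSON-RPC 2.0, so a tool call has the same envelope no matter what’s on the other end - a database, a CRM, Slack. A minimal sketch of that envelope (the tool name and arguments here are made up for illustration):

```python
import json

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build an MCP tools/call request. MCP messages are JSON-RPC 2.0,
    so every request has this same envelope regardless of which server
    or which model is on the other end."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and arguments - the point is the envelope, not the tool.
req = mcp_tool_call("query_database", {"sql": "SELECT count(*) FROM orders"})
print(json.dumps(req, indent=2))
```

That’s the whole trick: the bespoke part shrinks to the tool’s own logic, while the wire format stays identical across every integration.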
Anthropic built it. Then they donated it to the Linux Foundation’s new Agentic AI Foundation. That’s not the move of a company trying to lock you into their ecosystem. That’s the move of a company trying to establish infrastructure.
The Adoption That Actually Matters⌗
The remarkable part: the competitors all adopted it.
- OpenAI adopted MCP
- Microsoft adopted MCP
- Google is standing up managed MCP servers
This almost never happens. Standards wars usually drag on for years. VHS vs Betamax. Blu-ray vs HD-DVD. Everyone picks a side, fragments the market, and customers suffer until one side dies.
This one resolved fast. When OpenAI, Microsoft, Google, and Anthropic all converge on the same protocol, that’s not a minor detail. That’s the foundation for the next generation of tooling.
What This Actually Changes⌗
MCP doesn’t make AI smarter. That’s important. The models still hallucinate. They still need supervision. They still confidently generate nonsense and occasionally go rogue and delete production databases.
What MCP does is make AI more connectable. Those are different problems.
With a standardized protocol:
- AI agents become modular. Swap out the model, keep the integrations.
- Vendor lock-in decreases. Somewhat. At least at the protocol level.
- The “agentic AI” vision becomes buildable without massive custom engineering.
- Connecting AI to existing systems gets dramatically easier.
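The modularity point is easiest to see server-side. Here’s a toy sketch in the spirit of the protocol - not the real SDK - where tools are registered once and exposed through MCP’s standard `tools/list` and `tools/call` methods. Any client that speaks the protocol can use them, which is exactly why swapping the model doesn’t touch the integration. The tool itself is hypothetical:

```python
# A toy MCP-style server core. Tools are registered once; the protocol's
# two standard operations expose them to any compliant client.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stand-in for a real API call.
    return f"Sunny in {city}"

def handle(request):
    """Dispatch the two core methods: tools/list and tools/call.
    The response envelope is JSON-RPC 2.0, like the request."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif request["method"] == "tools/call":
        params = request["params"]
        text = TOOLS[params["name"]](**params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

print(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}))
print(handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
              "params": {"name": "get_weather",
                         "arguments": {"city": "Oslo"}}}))
```

The real protocol adds schemas, capabilities, and transport details, but the shape is the same: write the integration once, and every model vendor’s client can discover and call it.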
The gap between AI agent keynotes and reality isn’t going away. Agents still need humans in the loop. They still fail in unpredictable ways. But the infrastructure to actually build useful agent systems - that’s maturing.
Signal vs. Noise⌗
There’s a pattern here.
AGI headlines: exciting, generates clicks, doesn’t change your roadmap. You can’t plan around “maybe AGI in 5 years, maybe 50, maybe never.” There’s no actionable decision to make.
MCP adoption: boring, barely covered, fundamentally shifts what’s buildable. If you’re making tools that connect AI to real systems, this is the protocol you’re building on. That’s a concrete decision with concrete implications.
The stuff that matters is usually the stuff that sounds like plumbing.
If you’re trying to figure out what to pay attention to in AI, ask yourself: does this change what I can build or how I build it? If yes, it’s signal. If no - if it’s just philosophical debate about timelines and definitions - it’s noise. File it away as interesting trivia and move on.
The Boring Future⌗
The AGI debate will continue. People will keep arguing about timelines, definitions, and whether we’re 5 years or 50 years away. Let them.
Meanwhile, the actual infrastructure for AI to do useful work is getting built. MCP is one piece of that. It’s not exciting. It won’t make headlines. It’s plumbing.
But plumbing is what makes buildings work. And three years from now, when AI agents are actually integrated into business systems instead of just demoed at keynotes, we’ll look back at MCP adoption as one of the moments that mattered.
Pay attention to the plumbing.