MCP Is Not Dead. You’re Just Using It Wrong.

Every few months, software engineering picks a technology to bury. In early 2026, that target is the Model Context Protocol (MCP). Perplexity’s CTO says it bloats context windows. Cloudflare has moved toward code generation. A wave of high-follower posts has framed MCP as a failed experiment. The pattern is familiar: a useful protocol gets hyped, attracts bad implementations, and then gets blamed for them. So let’s be clear. “MCP is dead” is not a technical conclusion. It’s a vibe.

The criticism is real. The conclusion is wrong.

Critics are pointing to real problems, but assigning blame too broadly. Context bloat is real. A badly designed GitHub MCP server can ship with 93 tool definitions and consume roughly 55,000 tokens before the user says anything. Add Jira, a database connector, and Confluence, and schema alone can push past 150,000 tokens. That is not abstract. It is a design failure that cuts directly into an agent’s reasoning budget.

Cloudflare’s numbers make the same point more sharply. Expose 2,500 API endpoints as MCP tools and you need around 244,000 tokens just to describe them. Their alternative, code generation against an already authorized client, can do the same work in roughly 1,000 tokens. Perplexity’s critique also holds: if tool schemas consume as much as 72% of the context window before user input arrives, the architecture is broken.
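The arithmetic behind these figures is easy to reproduce. A back-of-envelope sketch, where the totals come from the numbers above and the per-tool costs are derived from them (they are averages, not measured constants):

```python
# Back-of-envelope arithmetic for tool-schema context cost.
# Totals come from the reported figures; per-tool costs are derived
# averages, not measured constants.

def tokens_per_tool(total_tokens: int, num_tools: int) -> float:
    """Average token cost of a single tool definition."""
    return total_tokens / num_tools

def window_share(schema_tokens: int, window_tokens: int) -> float:
    """Fraction of the context window consumed before any user input."""
    return schema_tokens / window_tokens

# GitHub-style server: 93 tools, ~55K tokens of schema
github = tokens_per_tool(55_000, 93)        # ~591 tokens per tool

# 2,500 endpoints exposed naively: ~244K tokens of schema
full_api = tokens_per_tool(244_000, 2_500)  # ~98 tokens per endpoint

# Against a 200K-token window, the naive full-API server does not even fit
print(f"{github:.0f} tokens/tool; full API uses "
      f"{window_share(244_000, 200_000):.0%} of a 200K window")
```

The striking part is the last line: at naive scale, the schema alone overflows a 200K window before the agent has done anything.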

But none of this proves MCP is dead. It proves that dumping every possible tool into a flat MCP server and calling it an agent architecture is lazy engineering. The problem is not MCP. The problem is using it without designing for context cost.

The adoption curve says the opposite.

If MCP were actually dying, the adoption curve would look very different. Instead, the ecosystem shows rapid growth. MCP server downloads grew from roughly 100,000 in November 2024 to more than 8 million by April 2025. Anthropic reports 97 million monthly downloads of its official Python and TypeScript SDKs, and more than 10,000 public MCP servers had been deployed by early 2026. Those are not the numbers of a protocol being abandoned. They are the numbers of a protocol moving into wider use.

The vendor landscape points the same way. OpenAI, Google, Microsoft, and AWS have all adopted MCP in some form, despite having every incentive to ignore a weak standard. In December 2025, Anthropic donated MCP to the Linux Foundation’s Agentic AI Foundation, making it vendor-neutral and community-governed. Gartner projects that by 2026, 75% of API gateway vendors and 50% of iPaaS vendors will ship MCP features, which only strengthens the case.

The “MCP is dead” discourse and the actual MCP ecosystem are not describing the same reality. One is reacting to bad implementations. The other is describing a standard becoming infrastructure.

MCP is not about efficiency. It’s about control.

Most of the debate collapses an important distinction: CLI beats MCP on efficiency, but MCP beats CLI on governance. Those are not competing claims. They answer different questions.

If an agent already knows what to call and is acting directly on your behalf, CLI-based tool use is often faster, cheaper, and easier to debug. For a solo developer running local tools in Cursor or a similar setup, CLI is often the better choice. In that context, MCP adds overhead without enough value to justify it.

A similar confusion shows up in the way people compare MCP to skills. They are not substitutes. Skills package reusable behaviors, instructions, and workflows inside the model layer. They help an agent know what to do and how to do it more consistently. MCP solves a different problem: it connects the model to external systems with permissions, auditability, and clear tool boundaries. Skills can make an agent smarter. MCP makes it possible for that agent to operate safely against real systems.

The equation changes once the agent stops acting as you and starts acting on behalf of other people. That is where enterprise begins. MCP gives you per-user OAuth, explicit tool boundaries, structured audit trails, and a governance surface that regulated environments need before they let agents touch production data. A CLI wrapper does not give you that. A custom integration layer can, but only if you rebuild the same governance machinery MCP is trying to standardize.
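To make "governance surface" concrete, here is a minimal sketch of the pattern MCP standardizes: an explicit tool boundary, per-user scopes (as OAuth consent would grant them), and an audit trail that records every attempt, allowed or not. All names here are hypothetical illustrations, not the MCP SDK:

```python
# Hypothetical sketch of a governed tool dispatcher, illustrating the
# pattern MCP standardizes. Not the MCP SDK; all names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolCall:
    user: str
    tool: str
    allowed: bool
    at: str  # ISO-8601 timestamp for the audit trail

@dataclass
class GovernedDispatcher:
    tools: dict = field(default_factory=dict)   # explicit tool boundary
    scopes: dict = field(default_factory=dict)  # per-user granted scopes
    audit: list = field(default_factory=list)   # structured audit trail

    def register(self, name, fn, required_scope):
        self.tools[name] = (fn, required_scope)

    def call(self, user, name, *args):
        fn, scope = self.tools[name]  # unregistered tool -> KeyError
        ok = scope in self.scopes.get(user, set())
        # Every attempt is logged, including denials.
        self.audit.append(
            ToolCall(user, name, ok, datetime.now(timezone.utc).isoformat())
        )
        if not ok:
            raise PermissionError(f"{user} lacks scope {scope!r} for {name}")
        return fn(*args)

d = GovernedDispatcher()
d.register("read_ticket", lambda tid: f"ticket {tid}", "tickets:read")
d.scopes["alice"] = {"tickets:read"}
print(d.call("alice", "read_ticket", 42))  # allowed, and logged
```

A user without the scope gets a `PermissionError`, but the attempt still lands in the audit trail. That denial-plus-record behavior is exactly what a bare CLI wrapper does not give you.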

This is the standard hype-cycle mistake.

What is happening with MCP is not unusual. It is the same pattern that appears whenever a new standard matures in public.

REST was supposedly dead when GraphQL arrived. GraphQL was supposedly bloated when tRPC got hot. Kubernetes was supposedly too complex for any team that wasn’t Google-scale until it became standard infrastructure.

The script rarely changes. A standard emerges, people build naive versions of it, those versions expose real problems, critics confuse those problems with the standard itself, and the standard hardens anyway.

MCP is at exactly that stage. The weak implementations are being called out, and they should be. But criticism of immature implementations is not evidence that the protocol itself has failed. In fact, the opposite seems true. The 2026 MCP spec added asynchronous operations, statelessness improvements, and server identity features aimed directly at the production gaps critics identified.

That is not the behavior of a dead standard. It is the behavior of a standard being stress-tested, refined, and pushed toward production use.

The fix is better architecture, not abandonment.

If you are building with MCP and running into the problems critics describe, the answer is not to throw out the protocol. The answer is to design more carefully. Every tool costs tokens, so server surface area has to be treated as an architectural decision, not a convenience.

The real question is not “what can MCP expose?” but “what does this agent actually need?” Smaller, tighter tool surfaces usually perform better because they preserve reasoning budget and reduce ambiguity.

The same principle applies at the system level. You need to match the tool to the job. For local, single-developer, latency-sensitive workflows, CLI will often win. For multi-user, governed, audit-heavy systems, MCP makes more sense. In both cases, context budget has to be treated like any other architectural constraint.

The Harness MCP v2 example shows the difference between protocol failure and implementation maturity. Their first version exposed 130+ tools and created serious context bloat. The redesign cut that to 11, dropping tool-definition cost from roughly 26% of a 200K-token window to 1.6%. The protocol did not fail. The implementation got smarter. That is what maturity looks like.
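The Harness numbers are easy to sanity-check. A quick sketch, assuming the 200K-token window the article cites (per-tool costs are derived from the reported percentages):

```python
# Sanity-checking the Harness v1 -> v2 figures against a 200K window.
# Percentages come from the reported numbers; per-tool costs are derived.
WINDOW = 200_000

v1_tools, v1_share = 130, 0.26   # v1: 130+ tools, ~26% of the window
v2_tools, v2_share = 11, 0.016   # v2: 11 tools, ~1.6% of the window

v1_tokens = WINDOW * v1_share    # ~52,000 tokens of schema
v2_tokens = WINDOW * v2_share    # ~3,200 tokens of schema

print(f"v1: {v1_tokens:,.0f} tokens ({v1_tokens / v1_tools:.0f}/tool)")
print(f"v2: {v2_tokens:,.0f} tokens ({v2_tokens / v2_tools:.0f}/tool)")
print(f"reasoning budget reclaimed: {v1_tokens - v2_tokens:,.0f} tokens")
```

The redesign reclaimed roughly 48,800 tokens of reasoning budget per request without touching the protocol, which is the whole argument in miniature.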

MCP isn’t dead. It’s finally being used seriously.

MCP is not dead. It is post-hype, which is a healthier stage for any technology. Post-hype is when a tool stops getting credit for being new and starts getting judged on whether it is actually useful. That is a harder test, but also the only one that matters.

By that standard, MCP is holding up where it counts: in production, at scale, and in the enterprise environments where agents are most likely to matter.

A lot of people declaring MCP dead are reacting to naive implementations hitting predictable walls and drawing the wrong conclusion. Meanwhile, teams quietly deploying MCP in governed, well-scoped, multi-user systems are finding something less dramatic and more important: the protocol works when it is designed and deployed with care.

Bad implementations do not invalidate good protocols. They reveal where the engineering still needs work. That is not a eulogy. It is a build problem, and build problems get solved.

Originally published on Medium.