
Is MCP Dead?

A few months ago, we went all-in on MCP. We called it the integration standard that marketing teams should pay attention to.

Since then, the backlash has been fierce.

Developers are ripping out MCP servers. Eric Holmes published “MCP is dead. Long live the CLI” — it hit the top of Hacker News. Pieter Levels declared MCP “just as useless of an idea as LLMs.txt.” Thoughtworks placed “naive API-to-MCP conversion” in the Hold ring of their Technology Radar.

So — were we wrong?

The case against MCP

The criticism is real and worth taking seriously. Let’s walk through the specific problems people are hitting.

Token overhead is brutal

Every MCP connection starts with tool discovery — the AI reads a manifest of every available tool, its description, parameters, and schema. This happens every session. A database MCP server with 106 tools consumed 54,600 tokens just to initialise. That’s before the AI does anything useful.
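To make that overhead concrete, here is a rough sketch of how a large tool manifest eats context before any work happens. The tool shapes below are invented (loosely modelled on an MCP `tools/list` response), and the 4-characters-per-token ratio is a crude heuristic, not a real tokeniser:

```python
import json

def fake_tool(i: int) -> dict:
    # Hypothetical tool entry: name, description, and a JSON schema,
    # roughly the shape a tools/list response takes. Sizes are illustrative.
    desc = f"Runs a parameterised query against table {i} and returns rows as JSON. " * 3
    return {
        "name": f"db_tool_{i}",
        "description": desc,
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "SQL to execute"},
                "limit": {"type": "integer", "description": "Max rows to return"},
            },
            "required": ["query"],
        },
    }

# 106 tools, matching the database server example above
manifest = [fake_tool(i) for i in range(106)]
chars = len(json.dumps(manifest))
tokens = chars // 4  # crude heuristic: ~4 characters per token

print(f"{chars:,} characters, roughly {tokens:,} tokens, before the first real request")
```

Even with these modest invented descriptions, the manifest lands in the tens of thousands of tokens; real servers with verbose descriptions fare worse.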

Research from the MCPGauge framework found that MCP context retrieval can inflate input-token budgets by up to 236x while frequently degrading accuracy. We wrote about this hidden token cost and the challenges of context management early on. But the problem has become more visible as teams scale from one or two MCP servers to ten or twenty. At that point, tool descriptions alone can consume 40% of the context window.

For comparison, a traditional API call can be a single, stateless request — often just a GET. No discovery. No manifest. No session negotiation. One request, one response.
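As a sketch of that contrast (the endpoint here is hypothetical), the entire "integration" fits in one request object — everything the call needs travels with it:

```python
from urllib.request import Request

# Hypothetical endpoint: one stateless GET, no discovery step, no session handshake.
req = Request("https://api.example.com/v1/contacts?limit=10", method="GET")

print(req.get_method(), req.full_url)
```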

Session state adds complexity

MCP is stateful. It maintains connections, negotiates capabilities, and requires session management. This fights with load balancers, complicates horizontal scaling, and means your AI harness needs to manage connection lifecycle.

APIs are stateless. Fire and forget. The simplicity isn’t a limitation — it’s a feature.

Most MCP servers are just thin API wrappers

Here’s the uncomfortable truth: the majority of MCP servers in the wild today are thin wrappers around existing REST APIs. They take an API that already works and add a layer of protocol overhead on top. The MCP server calls the same endpoint you could call directly, but with more tokens, more latency, and more moving parts.

When the underlying value-add is just “the AI can discover this tool exists,” you have to ask whether the discovery cost is worth it. Thoughtworks made the same point in their Technology Radar, warning that granular API endpoints chained by AI agents lead to “excessive token usage, context pollution, and poor agent performance.”

LLMs already know how to use APIs

This is the argument that’s gained the most traction. Today’s large language models have enough world knowledge baked into their training data that they know how to use most popular APIs and CLI tools with minimal additional context. You don’t need to teach Claude how to call the GitHub API — it already knows.

Simon Willison has argued that agent skills — simple folders of Markdown files and optional scripts — may be more significant than MCP. His reasoning: “The simplicity is the point.” A well-structured SKILL.md file pointing to API documentation or CLI commands gives an agent everything it needs, without the protocol overhead.
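A skill in this style really is just a Markdown file. A hypothetical `SKILL.md` for a CRM lookup might look like the following — the frontmatter fields follow the common name/description convention, but the endpoints and environment variable are invented for illustration:

```markdown
---
name: crm-contact-lookup
description: Look up a contact in the CRM by email and summarise recent activity.
---

# CRM contact lookup

1. Call `GET https://api.example-crm.com/v1/contacts?email={email}`,
   sending the `CRM_API_KEY` environment variable as a Bearer token.
2. If the contact exists, fetch `/v1/contacts/{id}/activities?limit=20`.
3. Summarise the activities in three bullet points, newest first.
```

No protocol, no session, no manifest — the agent reads the file and makes the calls itself.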

Tools like Context7 take this further. Instead of wrapping every library in an MCP server, Context7 serves up-to-date, version-specific documentation directly into the AI’s prompt. The AI reads the docs and writes the integration code itself.

MCPorter goes the other direction — making it trivial to call existing MCP servers from TypeScript or CLI without boilerplate, effectively turning them into simple function calls.

The emerging alternative: code-first agents

The most provocative argument is that AI agents are better off writing their own integrations in code rather than going through MCP.

The logic: instead of loading a massive tool manifest and routing through a protocol layer, give the agent a brief description of what API or CLI to use, and let it write the integration code on the fly. This approach uses fewer tokens, produces more flexible integrations, and lets the agent adapt to any API without needing someone to build and maintain an MCP server for it.
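A minimal sketch of that pattern, assuming nothing beyond the standard library: the "manifest" shrinks to one short entry per tool, and the agent (or a thin harness) shells out directly. The tool dictionary here is hypothetical:

```python
import subprocess
import sys

# Hypothetical tool dictionary: a one-line hint per tool replaces a full MCP manifest.
TOOLS = {
    "python_version": {
        "hint": "Print the interpreter version",
        "argv": [sys.executable, "--version"],
    },
}

def run_tool(name: str) -> str:
    """Run a named tool and return its trimmed output."""
    spec = TOOLS[name]
    result = subprocess.run(spec["argv"], capture_output=True, text=True, check=True)
    return result.stdout.strip() or result.stderr.strip()

print(run_tool("python_version"))
```

The agent only ever pays the token cost of the hints it actually needs, and any CLI or API the model already knows becomes a tool without anyone writing a server for it.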

As FlowHunt documented, code-first agents can reduce token consumption by up to 98% compared to MCP tool loading, while improving agent autonomy and performance.

Ship a good API and a good CLI. The agents will figure it out.

Eric Holmes, “MCP is dead. Long live the CLI”

The pattern that’s emerging is straightforward: well-described tool dictionaries and hierarchical skill files pointing to API and CLI commands are sufficient for most agent workflows. MCP becomes an unnecessary intermediary.

So is MCP actually dead?

No. But its role is narrowing.

The numbers tell a different story

The MCP ecosystem is, by the numbers, thriving. As of Q1 2026, there are over 17,000 MCP servers and 143,000 executable AI components indexed across major registries. The SDK has reached 97 million monthly downloads. OpenAI, Google, Microsoft, and AWS have all adopted MCP. The protocol is now stewarded by the Linux Foundation as an open standard.

That’s not a dying ecosystem. Glama’s analysis places MCP past the “Peak of Inflated Expectations” and entering the “Slope of Enlightenment” — real adoption is growing, but the initial gold rush of low-quality MCP servers has cooled. By mid-2025, there were roughly 25 MCP builders for every actual user. The oversupply is correcting, and what’s left is more production-ready.

Enterprise adoption is accelerating. Block reports 50-75% time savings. Bloomberg cut deployment timelines from days to minutes. These aren’t toy demos — they’re production systems handling real workloads.

Where MCP still wins

The critics are mostly right — for single-user, developer-led workflows. If you’re a hacker running Claude Code locally, piping CLI tools together, and comfortable writing integration code, you don’t need MCP. Skills and direct API calls will serve you better.

But that’s not how marketing teams work.

Multi-user environments need controls. When you’re trying to figure out how marketing, sales, engineering, and finance teams can all connect data sources to AI, you need access controls, audit trails, and permission management. MCP provides a framework for this. A folder of Markdown files doesn’t.

Non-technical users need abstraction. The “just let the AI write the integration code” approach assumes the user has a code execution environment, understands what’s happening, and can debug when things go wrong. Most marketers don’t. MCP’s abstraction — connect once, use everywhere — is exactly the right level for teams that aren’t writing code.

Vendor-maintained integrations matter. When HubSpot builds an official MCP server, they maintain it. When their API changes, they update the server. When Notion launches OAuth-based MCP, it just works. The alternative — having every team maintain their own API integration scripts — doesn’t scale for non-engineering organisations.

Standardisation has compounding value. The skills approach works brilliantly when one person is using one AI tool. But when a team of 15 is using Claude, ChatGPT, and Gemini across different workflows, a shared integration standard means you configure once and it works everywhere. That’s MCP’s real value proposition — and it hasn’t been replaced.

CLI-first agents represent the future of autonomous AI on personal and developer infrastructure. MCP represents the bridge to enterprise adoption for the 99% of organizations that cannot — and should not — hand an AI agent unrestricted system access.

Tobias Pfuetze

The real split

The emerging consensus — articulated well by Peter Kellner — is that skills teach the agent, and MCP lets the agent act. These aren’t competing approaches. They’re complementary layers.

The practical split looks like this:

If you’re a solo marketer running Claude Code and comfortable with the terminal, skills will get you further faster. If you’re leading a marketing team of 10 that needs to connect AI to your CRM, analytics, and ad platforms with proper access controls, MCP is still the answer.

What this means for marketing teams

Here’s the practical takeaway.

MCP isn’t dead, but it’s not everything we thought it was. The protocol has real overhead costs that matter as you scale. The context problem and token tax are genuine constraints, not theoretical concerns.

Skills are the bigger unlock for individual productivity. If you want to get more from AI today, building agent skills and investing in context engineering will deliver more immediate value than adding more MCP servers.

For team-wide AI adoption, MCP still matters. The moment you need multiple people connecting to the same tools with proper permissions, MCP’s overhead becomes a worthwhile trade-off. It’s infrastructure, and infrastructure always has a cost.

The “just use APIs” argument has limits. It works for technical users. It breaks down for marketing teams that need things to work reliably without understanding what’s happening under the hood. The agentic marketing future we’re building toward needs both skills and MCP — not one or the other.

Watch the 2026 roadmap. The MCP community is actively addressing these problems. Streamable HTTP improvements, .well-known metadata for server discovery (so tools don’t need to load entire manifests), and enterprise features like SSO and audit trails are all in progress. The protocol a year from now will look meaningfully different.
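The discovery piece is still in flight, but the rough idea is a small, cacheable metadata document in place of a full manifest load at session start. A hypothetical shape — the field names here are invented, not the final spec:

```json
{
  "name": "example-crm",
  "version": "1.2.0",
  "endpoint": "https://mcp.example-crm.com/mcp",
  "capabilities": ["tools", "resources"],
  "tool_index": "https://mcp.example-crm.com/.well-known/mcp-tools.json"
}
```

A client could fetch a few hundred bytes like this, decide whether the server is relevant, and defer loading tool schemas until they're actually needed.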

The bottom line

MCP isn’t dead. But the era of “just add MCP to everything” is over.

The smartest teams are using a layered approach: skills and context engineering for individual agent capability, MCP for team-wide tool access and governance, and direct API calls when neither abstraction adds enough value to justify the overhead.

The question isn’t whether to use MCP. It’s where MCP belongs in your stack — and where simpler alternatives will serve you better.

I really like the term “context engineering” over prompt engineering. It describes the core skill better: the art of providing all the context for the task to be plausibly solvable by the LLM.

Tobi Lutke, CEO of Shopify

The same principle applies here. Whether you provide context through MCP servers, agent skills, or direct API documentation, what matters is giving the AI the right information at the right time — with the least overhead possible.
