There are a lot of great recs in the docs but I wrote this based on what I actually saw in the wild. I definitely don't think it's all on the spec to solve these.
I've seen a lot of articles since MCP started buzzing, many copy-and-pasting the same claims. And this post shows a lot of confusion about what MCP is and what MCP does.
MCP is a standard for describing and exposing tools to LLM applications.
So they're not the same category of thing. Langchain could implement aspects of the MCP specification, in fact it looks like they're doing that already: https://github.com/langchain-ai/langchain-mcp-adapters
Related: > Tool-Calling - If you’re like me, when you first saw MCP you were wondering “isn’t that just tool-calling?”...
Not everyone uses LangChain, nor does LangChain cover some of the lower-level aspects of actually connecting things up. MCP just helps standardize some of those details so any assistant/integration combo is compatible.
Edit: +1 simonw above
There's a whole section on how people can do things like analyze a combination of Slack messages, and how they might use that information. That's more of an argument that agents are dangerous. You can think MCP is a good spec that lets you create dangerous things, but conflating those arguments under "MCP bad" is disingenuous.
I'd rather have more details and examples of problems with the spec itself. "You can use it to do bad things" doesn't cut it. I can use HTTP and SSH to do bad things too, so it's more interesting to show how Eve might use MCP to do malicious things to Alice or Bob, who are trying to use MCP as intended.
No, it's not fair at all. You can't add security afterwards like spreading icing on baked cake. If you forgot to add sugar to the cake batter, there's not enough buttercream in the world to fix it.
There is no need to implement a new form of authentication that's specific to the protocol because you already have a myriad of options available with HTTP.
Any form of auth used to secure a web service can be used with MCP. It's no different than adding authN to a REST API.
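To make that concrete, here's a minimal sketch of gating an MCP HTTP endpoint with a plain bearer token, the same way you'd protect any REST API. The header handling is standard HTTP; the token value and function name are illustrative, not anything from the MCP spec.

```python
# Sketch: standard bearer-token check layered in front of an MCP HTTP
# transport, exactly as one would for any REST API. Token is illustrative.
import hmac

EXPECTED_TOKEN = "s3cret-token"  # in practice, loaded from a secret store

def is_authorized(headers: dict) -> bool:
    """Return True if the request carries a valid Authorization header."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)

print(is_authorized({"Authorization": "Bearer s3cret-token"}))  # True
print(is_authorized({"Authorization": "Bearer wrong"}))         # False
```

Nothing here is MCP-specific; that's the point — the transport is just HTTP, so the whole existing auth toolbox applies.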
Please just read the spec. It builds on top of JSON-RPC; there's nothing special or inherently new about this protocol.
https://modelcontextprotocol.io/specification/2025-03-26
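You can see the JSON-RPC layering directly on the wire. Here's a sketch of what a `tools/call` request looks like as plain JSON-RPC 2.0 — the tool name and arguments are made-up examples, not part of the spec:

```python
import json

# An MCP tool invocation is just a JSON-RPC 2.0 request.
# "get_forecast" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",
        "arguments": {"city": "Berlin"},
    },
}

wire = json.dumps(request)      # what actually goes over the transport
decoded = json.loads(wire)
print(decoded["method"])        # tools/call
```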
There are way too many commentators like yourself that have no idea what they are talking about because they couldn't be bothered to RTFM.
The choice isn't no security vs. security; it's non-standardized vs. standardized security.
Agreed, though it's not ideal, and there will definitely be non-zero harm from that decision.
I would think authZ is the trickier unhandled part of MCP as I don’t remember any primitives for authorization denial or granularity, but, again, HTTP provides a coarse authZ exchange protocol.
You can use whatever authN/authZ you want for the HTTP transport. It's entirely up to the client and server implementers.
But when you are running things in any kind of multi-user/multi-tenant scenario, this is much harder, and the protocol doesn't really address it (though it also doesn't prevent us from layering something on top). As a dumb (but real) example, I don't want a web-enabled version of an MCP plugin to have access to my company's Google Drive and expose that to all our chat users. That would bypass the RBAC we have. I also don't want to bake that in at the level of the tool calls, as those can be injected. I need some side-channel information on the session so the client and server can manage that.
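One way to sketch that side-channel idea: the server resolves permissions from its own session record (established out-of-band when the connection is set up), never from the tool-call arguments, which a prompt-injected model could control. All names here are hypothetical, not anything the protocol defines.

```python
# Hypothetical sketch: authorization keyed on a server-held session,
# never on the (injectable) tool-call arguments.
SESSIONS = {
    "sess-42": {"user": "alice", "roles": {"chat-user"}},
}

def call_tool(session_id: str, tool: str, args: dict) -> str:
    session = SESSIONS.get(session_id)
    if session is None:
        raise PermissionError("unknown session")
    # RBAC check uses only the session record; nothing in `args`
    # can grant access, so prompt injection can't escalate.
    if tool == "drive_search" and "drive-reader" not in session["roles"]:
        raise PermissionError("session lacks drive access")
    return f"{tool} ok for {session['user']}"

print(call_tool("sess-42", "echo", {"text": "hi"}))  # echo ok for alice
# call_tool("sess-42", "drive_search", {}) raises PermissionError
```

The point is where the trust lives: in the session the humans set up, not in anything the model can write into a request.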
The only upside to these technologies being shotgun implemented and promoted is that they'll inevitably lead to a failure that can't be pushed under the rug (and will irreversibly damage the credibility of AI usage in business).
It appears Anthropic developed this "standard" in a vacuum with no scrutiny or review and it turns out it's riddled with horrific security issues, ignored by those hyping up the "standard" for more VC money.
Reminds me of the micro-services hype, which helped the big cloud providers more than it helped startups with less money; plenty over-did it and were left with enormous technical debt and complex diagrams costing them millions to run.
> We are so back
MCP calls itself a “protocol,” but let’s be honest—it’s a framework description wrapped in protocol cosplay. Real protocols define message formats and transmission semantics across transport layers. JSON-RPC, for example, is dead simple, dead portable, and works no matter who implements it. MCP, on the other hand, bundles prompt templates, session logic, SDK-specific behaviors, and application conventions—all under the same umbrella.
As an example, I evidently need to install something called "uv", using a piped script pulled in from the Internet, to "run" the tool, which is done by putting this into a config file for Claude Desktop (which then completely hosed my Claude Desktop):
```json
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "fastmcp",
        "fastmcp",
        "run",
        "C:\\Users\\kord\\Code\\mcptest\\weather.py"
      ]
    }
  }
}
```
They (the exuberant authors) do mention transport—stdio and HTTP with SSE—but that just highlights the confusion we're seeing here. A real protocol either doesn't care how it's transported or defines the transport clearly. MCP tries to do both and ends up muddying the boundaries. And the auth situation? It waves toward OAuth 2.1, but offers almost zero clarity on implementation, trust delegation, or actual enforcement. It's a rat's nest waiting to unravel once people start pushing for real-world deployments that involve state, identity, or external APIs with rate limits and abuse vectors. This feels like yet another centralized spec written for one ecosystem (TypeScript AI crap), claiming universality without earning it.
And let’s talk about streaming vs formatting while we’re at it. MCP handwaves over the reality that content coming in from a stream (like SSE) has totally different requirements than a local response. When you’re streaming partials from a model and interleaving tool calls, you need a very well-defined contract for how to chunk, format, and parse responses—especially when tools return mid-stream or you’re trying to do anything interactive.
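The kind of framing contract that's needed is small but has to be nailed down. Here's a minimal sketch of SSE event assembly per the standard event-stream format (events separated by blank lines, multiple `data:` lines joined with newlines) — the partial-JSON payloads are made-up examples of streamed chunks:

```python
# Minimal SSE framing sketch: events are blank-line separated;
# multiple data: lines within one event are joined with newlines.
def parse_sse(stream: str) -> list[str]:
    events, buf = [], []
    for line in stream.split("\n"):
        if line == "":
            if buf:                      # blank line ends the event
                events.append("\n".join(buf))
                buf = []
        elif line.startswith("data:"):
            buf.append(line[len("data:"):].lstrip())
    if buf:                              # flush a trailing event
        events.append("\n".join(buf))
    return events

chunks = 'data: {"partial": "Hel"}\n\ndata: {"partial": "lo"}\n\n'
print(parse_sse(chunks))  # ['{"partial": "Hel"}', '{"partial": "lo"}']
```

Even this toy version forces the questions MCP leaves open: what goes in each chunk, how a tool call interrupts the stream, and how the client reassembles partials into something parseable.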
Right now, only a few clients are actually supported (Anthropic’s Claude, Copilot, OpenAI, and a couple local LLM projects). But that’s not a bug—it’s the feature. The clients are where the value capture is. If you can enforce that tools, prompts, and context management only work smoothly inside your shell, you keep devs and users corralled inside your experience. This isn’t open protocol territory; it’s marketing. Dev marketing dressed up as protocol design. Give them a “standard” so they don’t ask questions, then upsell them on hosted toolchains, orchestrators, and AI-native IDEs later. The LLM is the bait. The client is the business.
And yes, Claude helped write this, but it's exactly what I would say if I had an hour to type it out clearly.