Posted by hsanthan 2 days ago
In general, the only way to make sure MCPs are safe is to limit which connections are made in an enterprise setting
It would be silly to provide every employee access to GitHub, regardless of whether they need it. It’s just distracting and unnecessary risk. Yet people are over-provisioning MCPs like you would install apps on a phone.
Principle of least access applies here just as it does anywhere else.
I tried and failed after about 3 days of dealing with AI-slop-generated nonsense that has _never_ worked. The MCP spec was probably created by brainless AI agents, so it defines two authentication methods: no authentication whatsoever, and OAuth that requires bleeding-edge features (dynamic client registration) not implemented by Google or Microsoft.
The easiest way for that right now is to ask users to download a random NodeJS package that runs locally on their machines with minimal confinement.
I think the only difference is the statefulness of the request. HTTP is stateless, but MCP has state? Is this right?
I haven’t seen many use cases for how to use the state effectively, but I thought that was the main difference over a plain REST API.
MCP can use SSE to support notifications (since the protocol embeds a lot of state, you need to be able to tell the client that the state has changed), elicitation (the MCP server asking the user to provide additional information to complete a tool call), and will likely use it to support long-running tool calls.
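To make those two message types concrete, here is a sketch of what they look like on the wire, assuming the JSON-RPC 2.0 framing MCP uses (the method names `notifications/tools/list_changed` and `elicitation/create` are taken from the spec; the prompt text and schema are hypothetical):

```python
import json

# A JSON-RPC 2.0 notification: it has no "id", so the client must not
# reply. MCP servers push these over the SSE/streamable-HTTP channel to
# tell the client that server-side state (here, the tool list) changed.
tools_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/tools/list_changed",
}

# An elicitation, by contrast, is a server-to-client *request* (it has
# an "id") asking the user for extra input mid-tool-call.
elicit = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Which repository should I use?",  # hypothetical
        "requestedSchema": {
            "type": "object",
            "properties": {"repo": {"type": "string"}},
        },
    },
}

print(json.dumps(tools_changed))
```

The presence or absence of `id` is the whole difference between "fire and forget" and "the server is waiting on an answer", which is why the transport needs a server-to-client channel at all.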
Many of these features have unfortunately been specified in the protocol before clear needs for them have been described in detail, and before other alternative approaches to solving the same problems were considered.
https://blog.fka.dev/blog/2025-06-06-why-mcp-deprecated-sse-...
There are a million "why don't you _just_ X?" hypothetical responses to all the real issues people have with streamable HTTP as implemented in the spec, but you can't argue your way into a level of ecosystem support that doesn't exist. The exact same screwup happened with OAuth, so we can see who is running the show and how they think.
It's hard to tell if there is some material business plan Anthropic has with these changes or if the people in charge of defining the spec are just kind of out of touch, have non-technical bosses, and have managed to politically disincentivize other engineers from pointing out basic realities.
You can debate all day whether bringing your own tools is a good thing vs giving the LLM a generic shell tool and an API doc and letting it run curls. I like tools because it brings reproducibility.
MCP is really just a JSON-RPC spec. JSON-RPC can take place over a variety of transports and under a variety of auth mechanisms; MCP doesn't need to spec a transport or auth mechanism.
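For illustration, a minimal sketch of the exchange, assuming the `tools/call` method name from the spec (the tool name and payload are hypothetical); nothing in these messages dictates a transport or an auth mechanism:

```python
import json

# Client -> server: invoke a tool. Plain JSON-RPC 2.0.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# Server -> client: the result, correlated by the same id.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "12C, overcast"}]},
}

print(json.dumps(request))
```

The same bytes could travel over stdio between local processes, over HTTP with a bearer token, or over anything else that moves JSON, which is the point being made here.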
I totally agree with everybody that most MCP clients are half-assed and remote MCP is not well supported, but that's a business problem.
Every LLM tool today either runs locally (Cursor, Zed, IDEs, etc.), so it can run MCP servers as local processes with no auth, or is run by an LLM provider for whom interoperability is not a business priority. So the remote MCP story has not been fleshed out.
https://github.com/modelcontextprotocol/modelcontextprotocol... https://github.com/modelcontextprotocol/modelcontextprotocol... https://github.com/modelcontextprotocol/modelcontextprotocol... https://aaronparecki.com/2025/05/12/27/enterprise-ready-mcp https://github.com/modelcontextprotocol/modelcontextprotocol... https://www.okta.com/newsroom/press-releases/okta-introduces... https://github.com/modelcontextprotocol/ext-auth/blob/main/s... https://github.com/modelcontextprotocol/modelcontextprotocol... https://github.com/modelcontextprotocol/modelcontextprotocol... https://github.com/modelcontextprotocol/modelcontextprotocol...
https://news.ycombinator.com/item?id=45022004 (Aug 2025)
https://news.ycombinator.com/item?id=44396920 (June 2025)
https://news.ycombinator.com/item?id=43028401 (Feb 2025)
https://news.ycombinator.com/item?id=39337678 (Feb 2024)
You've obviously got some substantive points to make here, which is great, but indignant putdown rhetoric has a destructive effect on the threads. If you could just make your substantive points thoughtfully, we'd appreciate it.
However, it IS also a description of the current state of affairs in MCP land. The comment threads and proposals in the MCP projects are dominated by LLM-generated text, so it is almost impossible to keep the full picture in one's mind. LLMs have made it possible to create an overwhelming amount of activity with ease.
Moreover, a lot of the _code_ for MCP servers is also AI-generated and has never been used in practice. It's easy to verify. Here are GitHub search results for the ProxyOAuthServerProvider that is supposed to delegate authentication to a third-party server: https://github.com/search?q=ProxyOAuthServerProvider&type=co...
There are 215 results at the time of writing, and all but 3 of them are either forks of or LLM-fueled rewrites of the same code from the `modelcontextprotocol` repo. And one of the 3 is mine, and it doesn't quite work.
So yes, "It's just more of AI slop". Sorry. That's just a neutral description of the current state of affairs in the MCP/AI world. And yes, it's absolutely horrifying.
(Btw, that makes your comment a case of the so-called rebound effect (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) - one of the more interesting phenomena we've noticed.)
* I mean qua moderator of course. Qua reader, I have no opinion.
> bit beyond the pale
Uhm... Why? It's an honest question. I thought that "horrifying" (as in "inducing horror") is a normal descriptor, not racially/sexually loaded or anything. The AI situation certainly induces real dread in me.
This is quite minor though; if it distracts from the main point, I'd say forget it.
https://modelcontextprotocol.io/specification/2025-06-18/ser...
I'm guessing it has the same shape as a normal message plus an isError flag, so on the handling side you don't really have to do anything special: just proceed as normal and send the results to the LLM so it can correct if needed.
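That matches the `CallToolResult` shape on the linked spec page: success and failure differ only in `isError`, and both carry their payload in `content`. A sketch (the row-count and error strings are made up):

```python
# A successful and a failed tools/call result share one shape; a client
# can forward either to the LLM unchanged and let it decide what to do.
ok = {
    "content": [{"type": "text", "text": "42 rows updated"}],
    "isError": False,
}
failed = {
    "content": [{"type": "text", "text": "Error: table 'users' not found"}],
    "isError": True,
}

def forward_to_llm(result: dict) -> str:
    # No special-casing: the same code path handles both outcomes.
    return result["content"][0]["text"]

print(forward_to_llm(failed))
```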
I'd love to hear more about the specific issues you're running into with the new version of the spec. (disclaimer - I work at an auth company! email in bio if you wanna chat)
So far, I have not been able to do it, and there are no examples that I can find. It's also all complicated by the total lack of logs from ChatGPT detailing the errors.
I'll probably get there eventually and publish a blog...
Is it possible for the customer to provide their own bearer tokens (generated however) that the LLM can pass along to the MCP server? This is the closest thing to workable security I've seen. I don't know how well Chat GUI/web clients support user-supplied tokens, but it should be possible when calling an LLM through an API-style call, right (if you add additional pass-through headers)?
In general, I'd say it's not a good idea to pass bearer tokens to the LLM provider; keep them in the MCP client. But your client has to be interoperable with the MCP server at the auth level, which is flaky at the moment across the ecosystem of generic MCP clients and servers, as noted.
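A sketch of that split, with a hypothetical server URL and tool name: the user-supplied bearer token is attached by the MCP client at the HTTP layer, so the LLM provider only ever sees tool names and results, never the credential.

```python
import json
import urllib.request

def build_mcp_request(name: str, arguments: dict,
                      token: str) -> urllib.request.Request:
    # Build a tools/call request with the user's bearer token attached.
    # The token lives in the MCP client and travels only on the HTTP
    # request to the MCP server, never into the prompt.
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }).encode()
    return urllib.request.Request(
        "https://mcp.example.com/mcp",  # hypothetical MCP endpoint
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

req = build_mcp_request("search_notes", {"query": "acme"},
                        "user-supplied-token")
print(req.get_header("Authorization"))
```

The catch described in this thread is exactly that many clients don't let you configure such a static header in the first place.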
Nope. I assumed as much and even implemented the bearer token authentication in the MCP server that I wanted to expose.
Then I tried to connect it to ChatGPT, and it turns out to NOT be supported at all. Your options are either no authentication whatsoever or OAuth with dynamic client registration. Claude at least allows the static OAuth registration (you supply client_id and client_secret).
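For contrast, dynamic client registration is the mechanism from RFC 7591: the client POSTs a registration request to the server and gets back a client_id, instead of an admin configuring one statically. A sketch of the request body (field names from the RFC, values hypothetical), to show what providers like Google and Microsoft would need to expose an unauthenticated endpoint for:

```python
import json

# RFC 7591 dynamic client registration request body. The client POSTs
# this to the authorization server's registration endpoint and receives
# a client_id (and, for confidential clients, a client_secret) back.
registration_request = {
    "client_name": "Example MCP Client",             # hypothetical
    "redirect_uris": ["https://client.example/cb"],  # hypothetical
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "none",
}

print(json.dumps(registration_request, indent=2))
```

Static registration (as Claude allows) skips this step entirely: you paste in a pre-issued client_id and client_secret instead.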
Metadata and resource indicators are solving the rest of the problems that came with the change to OAuth spec.
The LLMs are also really bad at generating correct code for OAuth logic - there are too many conditions there, and the DCR dance is fairly complicated to get right.
Shameless plug: we're building an MCP gateway that takes in any MCP server, and we do the heavy lifting to make it compatible with any client on the other end (Claude, ChatGPT - even with custom actions); as a nice bonus it gives you SSO/logs as well. https://mintmcp.com if you're interested.
Have you considered adding a stand-alone service? Perhaps using the AGPL+commercial license.
Even if you're doing local only - MCP tools can mostly be covered by simply asking Claude Code (or whatever) to use the bash equivalent.
In other words, downloading random crap that runs unconfined and requires a shitty app like Claude Desktop.
BTW, Claude Desktop is ALSO an example of AI slop. It barely works, constantly just closing chats, taking 10 seconds to switch between conversations, etc.
In my case, I wanted to connect our CRM with ChatGPT and ask it to organize our customer notes. And also make this available as a service to users, so they won't have to be AI experts with subscriptions to Claude.
I didn't even know what an MCP server was until I noticed the annoying category in the VS Code Extensions panel today. I was only able to get rid of it by turning off a broad AI-related flag in settings (fine by me; I wish I had known it was there earlier). An hour later, I'm seeing this.
ALAN
It's called Tron. It's a security
program itself, actually. Monitors
all the contacts between our system
and other systems... If it finds
anything going on that's not scheduled,
it shuts it down. I sent you a memo
on it.
DILLINGER
Mmm. Part of the Master Control Program?
ALAN
No, it'll run independently.
It can watchdog the MCP as well.
DILLINGER
Ah. Sounds good. Well, we should have
you running again in a couple of days,
I hope.

Our research lab discovered this novel threat back in July: https://invariantlabs.ai/blog/toxic-flow-analysis and built the tooling around it. This is an extremely common type of issue that many people don't realize exists (basically, when you are using multiple MCP servers that are individually safe but together can cause issues).
This org has gone to some dubious lengths to make a name for themselves, including submitting backdoored packages to public npm repos which would exfiltrate your data and send it to a Snyk-controlled C&C. This included the environment, meaning they would receive your username along with any env vars like git/aws/etc auth tokens.
This might give them some credibility in this space, maybe they stand a decent chance of scanning MCPs for backdoors based on their own experience in placing malicious code on other people's systems.