Posted by sshh12 4/13/2025

Everything wrong with MCP (blog.sshh.io)
516 points | 223 comments | page 3
superfreek 4/14/2025|
Great point in the article about tools lacking output schemas. Makes reliable multi-step planning tough.

We based Xops (https://xops.net) on OpenRPC for this exact reason (disclosure: we are the OpenRPC founders). It requires defining the result schema, not just the params, which helps plan how one step's outputs connect to the next step's inputs. Feels necessary for building complex workflows and agents reliably.
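
For a concrete sense of what that buys you, here is a minimal sketch of an OpenRPC-style method descriptor; the method name and fields are hypothetical, but the point is that the result schema is declared up front:

```typescript
// Hypothetical OpenRPC-style method descriptor. Both params and result carry
// JSON Schemas, so a planner can check that this method's output matches the
// next step's expected input before executing anything.
const getUserMethod = {
  name: "getUser",
  params: [
    { name: "userId", required: true, schema: { type: "string" } },
  ],
  result: {
    name: "user",
    schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        email: { type: "string" },
      },
      required: ["id", "email"],
    },
  },
};

export default getUserMethod;
```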

jtrn 4/14/2025||
What do we gain by using MCP that could not be achieved with the OpenAPI standard? It feels extremely redundant to me.
cruffle_duffle 4/14/2025||
What will the LLM use to call out to an OpenAPI system? Will it use a shell command like curl? How will it bind to the shell? How will the LLM’s host orchestrate that on the LLM’s behalf?

And who will define the credentials? And what is the URL? Oh, those are in the environment variables? How will the LLM get that info? Do I need to prompt the LLM all that info, wasting context window on minutia that has nothing to do with my task?

…if only there was a standard for that… I know! Maybe it can provide a structured way for the LLM to call curl and handle all the messy auth stuff and smooth over the edges between operating systems and stuff. Perhaps it can even think ahead and load the OpenAPI schema and provide a structured way to navigate such a large “context blowing” document so the LLM doesn’t have to use precious context window figuring it out? But at that point why not just provide the LLM with pre-built wrappers on top, specifically for whatever problem domain the REST API is dealing with?

Maybe we can call this protocol MCP?

Because think about it. OpenAPI doesn’t help the LLM actually reach out and talk to the API. It still needs a way to do that. Which is precisely what MCP does.
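
To make that wiring concrete, here is a rough sketch of a host-side MCP server entry, approximately the shape Claude Desktop's claude_desktop_config.json uses; the server name, package, and token are placeholders:

```typescript
// Sketch of a host-side MCP server entry. The host launches the server and
// injects credentials, so none of this burns context window in the prompt.
// The server name, package, and token below are hypothetical.
const mcpServers = {
  github: {
    command: "npx",                             // how the host starts the server
    args: ["-y", "@example/github-mcp-server"], // hypothetical server package
    env: { GITHUB_TOKEN: "<redacted>" },        // credentials live here, not in the prompt
  },
};

console.log(JSON.stringify({ mcpServers }, null, 2)); // what the JSON config would look like
```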

jtrn 4/14/2025||
So why do I have to make an MCP server? Could it not just hook into the OpenAPI JSON spec?

And who will define the credentials? The OpenAPI spec defines the credentials. MCP doesn't even allow for credentials, it seems, for now. But I don't think deleting a requirement is a good thing in this instance. I would like to have an API that I could reach from anywhere on the net and could secure with, for instance, an API key.

And what is the URL? You have to define this for MCP as well. For instance, in Cursor, you have to manually enter the endpoint under a key named "url."
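
Roughly, that entry looks like this (a sketch; the server name and URL are placeholders):

```typescript
// Rough shape of a remote MCP server entry in Cursor's mcp.json
// (server name and URL are placeholders).
const cursorMcpConfig = {
  mcpServers: {
    "my-remote-server": { url: "https://example.com/mcp/sse" },
  },
};

export default cursorMcpConfig;
```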

How will the LLM get that info? This was shown to be easy 1.5 years ago, with GPTs easily understanding the OpenAPI spec and being able to use any endpoint on the net as a tool.

I don't disagree that there needs to be a framework for using endpoints. But why can't it reach out to an OpenAPI endpoint? What do we gain from using a new "protocol"? I created a couple of MCP servers, and it just feels like going back 10 years in progress for creating and documenting web APIs.

Let me ask you this in reverse then: Have you created a basic API and used it as a tool in a GPT? And have you created an MCP server and added it to applications on your computer? If you have done both and still feel that there is something better with MCP, please tell, because I found MCP to be solving an issue that didn't need solving.

Create an awesome framework for reaching out to Web APIs and read the OpenAPI definition of the endpoint? GREAT! Enforce a new Web API standard that is much less capable than what we already have? Not so great.

You seem to miss that an MCP server IS an HTTP server already. It's just not safe to expose it to the net, and it comes with a new, more limited spec for how to document and set it up.

emilsedgh 4/14/2025||
The biggest advantage of this is for the LLM providers like OAI, not application developers.

LLMs are brains with no tools (no hands, legs, etc.).

When we use tool calling, we use tools to empower the brain. But with normal APIs, the language model providers like OpenAI have no access to those tools.

With MCP they do. The brain they create can now have access to a lot of tools that the community builds, directly _from_ the LLM, not through the apps.

This is here to make ChatGPT/Claude/etc _the gateway_ to AI rather than them just being API providers for other apps.

israrkhan 4/14/2025||
I feel MCP emerged too quickly. It will take some time and further protocol versions/enhancements for it to mature and address things like security. Also, it was not vetted by a conventional standards body, but just made public by a company. There's lots of hype around it; everyone seems to be talking about MCP these days.
sgt101 4/14/2025|
This is a key point - where are the applications?

Normally we have a standard when we have applications, but I am not seeing these yet... perhaps I am blind and mad!

jacobr1 4/14/2025|||
Three big places:

1 - Claude Desktop (and some more niche AI chat apps) - you can use MCPs to extend these chat systems today. I use a small number daily.

2 - Code Automation tools - they pretty much all have added MCP. Cursor, Claude Code, Cline, VSCode GH Copilot, etc ...

3 - Agent/LLM automation frameworks. There are a ton of tools to build agentic apps, and many support using MCP to integrate third party APIs with limited to no boilerplate. And if there are large libraries for every third party system you can imagine (like npm, but for APIs), then these are going to get used.

Still early days - but tons of real use, at least by the early adopter crowd. It isn't just a spec sitting on a shelf, for all its many faults.

sgt101 4/14/2025||
These examples seem to be tools built on this tool - which is cool and all, but it's not the equivalent of "a mail order catalogue you can access through your computer", or "a replacement for a travel agent".

What are the applications at the level of Amazon.com, Expedia, or Hacker News?

cruffle_duffle 4/14/2025|||
They are little things like “hey I want to get the LLM to fetch logs from elasticsearch” or “hey I want to create a way for the LLM to query my database in a more structured way”.

And yeah, in theory OpenAPI can do it, but not nearly as token efficient or user efficient. OpenAPI doesn’t help actually “connect” the LLM to anything; it’s not a tool itself but a spec. To use an OpenAPI compliant server you’d still need to tell the LLM how to authenticate, what the server address is, what tool needs to be used to call out (curl?), and even then you’d still need an affordance for the LLM to even make that call to curl. That “affordance” is exactly what MCP defines. It provides a structured way for the LLM to make tool calls.
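
Concretely, that affordance is just structured messages. A rough sketch of the shapes involved; the tool name and arguments here are made up:

```typescript
// Hypothetical tool as a server might advertise it via tools/list:
// a name, a description, and a JSON Schema for the inputs.
const queryLogsTool = {
  name: "query_logs",
  description: "Search application logs in Elasticsearch",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
      since: { type: "string", description: "ISO-8601 timestamp" },
    },
    required: ["query"],
  },
};

// The JSON-RPC call the host makes on the model's behalf when it picks the tool:
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query_logs",
    arguments: { query: "status:500", since: "2025-04-13T00:00:00Z" },
  },
};

// The result comes back as untyped content blocks -- which is also the article's
// complaint: there is no output schema equivalent to inputSchema.
const callResult = {
  content: [{ type: "text", text: "3 matching log lines ..." }],
};

export { queryLogsTool, callRequest, callResult };
```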

mehdibl 4/14/2025||
MCP has a BAD UI?

MCP is not a UI. It seems someone here is quite confused about what MCP is.

MCP has no security? Someone doesn't know that stdio is secure, and for SSE/HTTP there are already specs: https://modelcontextprotocol.io/specification/2025-03-26/bas....

MCP can run malicious code? That applies to any app you download. How is this an MCP issue? It happens with VSCode extensions and NPM libs. But blame MCP.

MCP transmits unstructured text by design?

This is totally funny. It's the tool that decides what to respond with. And the dialogue is quite

I'm starting to feel this post is a troll.

I stopped reading; it wasn't even worth continuing past the prompt injection part and so on.

ramesh31 4/14/2025||
MCP is absolutely a UI. It's just that the "user" is an LLM agent. Properly defining that interface is the crucial piece of developing any tool.
mehdibl 4/14/2025||
OK, so HTTP is a UI. Seriously, these comments are trolling.
dang 4/14/2025||
Please don't resort to accusing others of trolling, or of telling them they didn't read something (https://news.ycombinator.com/item?id=43677540). These are swipes, which the HN guidelines ask you to edit out of your posts here: https://news.ycombinator.com/newsguidelines.html.

If people are posting bad information or bad arguments, it's enough to respond with good information and good arguments. It's in your interests to do this too, because if you make them without swipes, your arguments will be more credible.

alternatex 4/14/2025||
We have to draw some line on good faith vs bad faith arguments though. Not understanding the difference between a UI and an API is a stretch, and purposefully conflating them just to win a semantic argument is not productive.
dang 4/14/2025||
The problem is that internet readers are far, far too prone to classify others as being in bad faith, so in practice, "drawing the line" usually amounts to a provocation. This bias is so strong that I don't think people can be persuaded to draw that line more accurately.

Moreover, the concept of good faith / bad faith refers to intent, and we can't know for sure what someone's intent was. So the whole idea of assessing someone else's good-faith level is doomed from the start.

Fortunately, there is a strategy that does work pretty well: assume good faith, and reply to bad information with correct information and bad arguments with better arguments. If the conversation stops being productive, then stop replying. Let the other person have the last word, if need be—it's no big deal, and in cases where they're particularly wrong, that last word is usually self-refuting.

mcintyre1994 4/14/2025|||
> MCP can run malicious code? That applies to any app you download. How is this an MCP issue? It happens with VSCode extensions and NPM libs. But blame MCP.

Nobody is saying MCP is the only way to run malicious code, just that like VSCode extensions and NPM install scripts it has that problem.

throw1290381290 4/14/2025||
> Someone doesn't know that stdio is secure

I'm sure someone in the comments will say that inter-process communication requires auth (-‸ლ.

frogsRnice 4/14/2025||
It absolutely does
Havoc 4/14/2025||
Much like the early days of JS and APIs in general this is presumably going to need time to evolve and settle.
codydkdc 4/14/2025||
the only one of these points I personally care about is:

> The protocol has a very LLM-friendly interface, but not always a human friendly one.

similar to the people asking "why not just use the API directly", I have another question: why not just use the CLI directly? LLMs are trained on natural language. CLIs are an extremely common solution for client/server interactions in a human-readable, human-writeable way (that can be easily traversed down subcommands)

for instance, instead of using the GitHub MCP server, why not just use the `gh` CLI? it's super easy to generate the help and feed it into the LLM, super easy to allow the user to inspect the command before running it, and already provides a sane exposure of the REST APIs. the human and the LLM can work in the same way, using the exact same interface
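
a rough sketch of that approach, assuming Node's child_process and a hypothetical askModel chat-completion helper:

```typescript
import { execFileSync } from "node:child_process";

// Minimal sketch of the "just use the CLI" approach: feed `gh --help` to the
// model, let it propose a command, show it to the user, then run it.
// `askModel` is a hypothetical stand-in for whatever chat-completion client you use.
async function runGhTask(
  task: string,
  askModel: (prompt: string) => Promise<string>,
): Promise<string> {
  const help = execFileSync("gh", ["--help"], { encoding: "utf8" });

  const proposed = await askModel(
    `You can use the GitHub CLI. Here is its help output:\n${help}\n` +
      `Reply with a single gh invocation (arguments only, no "gh" prefix) for: ${task}`,
  );

  const args = proposed.trim().split(/\s+/);
  console.log(`About to run: gh ${args.join(" ")}`); // the user can inspect before it executes
  return execFileSync("gh", args, { encoding: "utf8" });
}

export { runGhTask };
```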

jacobr1 4/14/2025|
FWIW - that is what Claude Code does, at least the way I use it. It uses the BashTool to call `gh`.
RKFADU_UOFCCLEL 4/14/2025||
I don't see the problem. I've just been using services with 3 - 10 verbs that the agent can use without adding any that can do harmful things. Even when they have no auth mechanism I can just put a temporary token in the URL.
a3w 4/14/2025||
BetaMax is better than VHS, the latter won.

Rkt is better than Docker, the latter won.

${TBD} is better than MCP, my bet is on MCP.

mdaniel 4/14/2025||
> Rkt is better than Docker, the latter won.

Your experience with rkt is way different from mine. I would gladly accept "podman is..." or even "nerdctl is..." but I hate rkt so much and was thrilled when it disappeared from my life

idonotknowwhy 4/15/2025||
BetaMax had better fidelity than VHS, but initially could only hold 60 minutes of A/V while VHS could do 120m (ie, almost a full movie without swapping tapes)
jappgar 4/14/2025||
Amazing to see so many comments out here in support of a very new and very boring protocol.

I have to think the enthusiasm is coming mostly from the vibe-coding snakeoil salespeople that seem to be infecting every software company right now.

cruffle_duffle 4/14/2025|
Alternatively it’s actually pretty cool and useful and you just personally don’t have a use for it?
otabdeveloper4 4/15/2025||
It's not "pretty cool", it's a shitty reimplementation of the OpenAPI spec made by NIH syndrome idiots.
emorning3 4/14/2025|
Anybody remember OSGi?

I can imagine a plugin-based server where the plugins are applications and AIs that all use MCP to interact. The server would add a discovery protocol.

That seems like the perfect use for MCP.
