We based Xops (https://xops.net) on OpenRPC for this exact reason (disclosure: we are the OpenRPC founders). It requires defining the result schema, not just the params, which helps you plan how one step's outputs connect to any other step's inputs. That feels necessary for building complex workflows and agents reliably.
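For illustration, an OpenRPC method descriptor pairs `params` with a required `result` content descriptor — the method and field names below are hypothetical, but the shape follows the OpenRPC spec:

```json
{
  "name": "getUser",
  "params": [
    { "name": "id", "schema": { "type": "string" } }
  ],
  "result": {
    "name": "user",
    "schema": {
      "type": "object",
      "properties": {
        "id": { "type": "string" },
        "email": { "type": "string" }
      }
    }
  }
}
```

Because the result schema is declared up front, a workflow tool can check at design time that `user.email` actually exists before wiring it into the next step's params.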
And who will define the credentials? And what is the URL? Oh, those are in the environment variables? How will the LLM get that info? Do I need to prompt the LLM with all that info, wasting context window on minutiae that have nothing to do with my task?
…if only there were a standard for that… I know! Maybe it could provide a structured way for the LLM to call curl, handle all the messy auth stuff, and smooth over the edges between operating systems. Perhaps it could even think ahead, load the OpenAPI schema, and provide a structured way to navigate such a large "context blowing" document so the LLM doesn't have to use precious context window figuring it out? But at that point, why not just provide the LLM with pre-built wrappers built specifically for whatever problem domain the REST API deals with?
Maybe we can call this protocol MCP?
Because think about it. OpenAPI doesn’t help the LLM actually reach out and talk to the API. It still needs a way to do that. Which is precisely what MCP does.
And who will define the credentials? The OpenAPI spec defines the credentials. MCP doesn't even allow for credentials, it seems, for now. But I don't think deleting a requirement is a good thing in this instance. I would like to have an API that I could reach from anywhere on the net and could secure with, for instance, an API key.
And what is the URL? You have to define this for MCP also. For instance, in Cursor, you have to manually enter the endpoint with a key named "url."
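For context, Cursor's MCP config file looks roughly like this (the server name and address here are hypothetical; the point is just the `url` key):

```json
{
  "mcpServers": {
    "my-server": {
      "url": "https://example.com/mcp"
    }
  }
}
```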
How will the LLM get that info? This was shown to be easy 1.5 years ago, with GPT's ready understanding of the OpenAPI spec and its ability to use any endpoint on the net as a tool.
I don't disagree that there needs to be a framework for using endpoints. But why can't it reach out to an OpenAPI endpoint? What do we gain from using a new "protocol"? I created a couple of MCP servers, and it just feels like going back 10 years in progress for creating and documenting web APIs.
Let me ask you this in reverse then: Have you created a basic API and used it as a tool in a GPT? And have you created an MCP server and added it to applications on your computer? If you have done both and still feel that there is something better with MCP, please tell, because I found MCP to be solving an issue that didn't need solving.
Create an awesome framework for reaching out to Web APIs and read the OpenAPI definition of the endpoint? GREAT! Enforce a new Web API standard that is much less capable than what we already have? Not so great.
You seem to miss that an MCP server IS an HTTP server already. It's just not safe to expose to the net, and it comes with a new and limited spec for how to document and set it up.
LLMs are brains with no tools (no hands, legs, etc.).
When we use tool calling, we use tools to empower the brain. But with normal APIs, the language-model providers like OpenAI have no access to those tools.
With MCP they do. The brain they create can now access a lot of tools the community builds directly _from_ the LLM, not through the apps.
This is here to make ChatGPT/Claude/etc _the gateway_ to AI rather than them just being API providers for other apps.
Normally we have a standard when we have applications, but I am not seeing these yet... perhaps I am blind and mad!
1 - Claude Desktop (and some more niche AI chat apps) - you can use MCPs to extend these chat systems today. I use a small number daily.
2 - Code Automation tools - they pretty much all have added MCP. Cursor, Claude Code, Cline, VS Code GitHub Copilot, etc ...
3 - Agent/LLM automation frameworks. There are a ton of tools for building agentic apps, and many support using MCP to integrate third-party APIs with little to no boilerplate. And if there are large libraries of every third-party system you can imagine (like npm, but for APIs), then these are going to get used.
Still early days - but tons of real use, at least by the early adopter crowd. It isn't just a spec sitting on a shelf, for all its many faults.
What are the applications at the level of Amazon.com, Expedia, or Hacker News?
And yeah, in theory OpenAPI can do it, but not nearly as token-efficiently or user-efficiently. OpenAPI doesn't help actually "connect" the LLM to anything; it's not a tool itself but a spec. To use an OpenAPI-compliant server you'd still need to tell the LLM how to authenticate, what the server address is, what tool needs to be used to call out (curl?), and even then you'd still need an affordance for the LLM to even make that call to curl. That "affordance" is exactly what MCP defines. It provides a structured way for the LLM to make tool calls.
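Concretely, that affordance is a JSON-RPC message shape the client and model both understand. The tool name and arguments below are made up, but `tools/call` is the method the MCP spec defines:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

The host application handles the transport, auth, and tool discovery (`tools/list`), so none of that plumbing has to live in the prompt.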
> The protocol has a very LLM-friendly interface, but not always a human friendly one.
similar to the people asking "why not just use the API directly", I have another question: why not just use the CLI directly? LLMs are trained on natural language. CLIs are an extremely common solution for client/server interactions in a human-readable, human-writeable way (that can be easily traversed down subcommands)
for instance, instead of using the GitHub MCP server, why not just use the `gh` CLI? it's super easy to generate the help and feed it into the LLM, super easy to allow the user to inspect the command before running it, and already provides a sane exposure of the REST APIs. the human and the LLM can work in the same way, using the exact same interface
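As a sketch of that workflow, assuming nothing beyond the Python standard library (the helper names here are mine, not part of any tool):

```python
import shlex

def build_command(subcommand: str, **flags) -> list[str]:
    """Assemble a `gh` invocation as an argv list the user can
    inspect before anything actually runs."""
    cmd = ["gh"] + subcommand.split()
    for flag, value in flags.items():
        cmd.append(f"--{flag.replace('_', '-')}")
        if value is not True:  # boolean flags take no argument
            cmd.append(str(value))
    return cmd

def render(cmd: list[str]) -> str:
    """Shell-quoted form shown to the human for approval."""
    return " ".join(shlex.quote(part) for part in cmd)

# The LLM proposes a command; the human reads it before it executes.
proposed = build_command("issue list", repo="cli/cli", state="open")
print(render(proposed))  # gh issue list --repo cli/cli --state open
```

The rendered string is exactly what the human could paste into a terminal themselves, which is the symmetry the comment is pointing at.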
MCP is not a UI. It seems someone here is quite confused about what MCP is.
MCP has no security? Some people don't seem to know that stdio is secure, and for SSE/HTTP there were already specs: https://modelcontextprotocol.io/specification/2025-03-26/bas....
MCP can run malicious code? That applies to any app you download. How is this an MCP issue? It happens with VS Code extensions. NPM libs. But blame MCP.
MCP transmits unstructured text by design?
This is totally funny. It's the tool that decides what to respond. And the dialogue is quite
I start feeling this post is a troll.
I stopped reading; it wasn't even worth continuing past the prompt injection part and so on.
If people are posting bad information or bad arguments, it's enough to respond with good information and good arguments. It's in your interests to do this too, because if you make them without swipes, your arguments will be more credible.
Moreover, the concept of good faith / bad faith refers to intent, and we can't know for sure what someone's intent was. So the whole idea of assessing someone else's good-faith level is doomed from the start.
Fortunately, there is a strategy that does work pretty well: assume good faith, and reply to bad information with correct information and bad arguments with better arguments. If the conversation stops being productive, then stop replying. Let the other person have the last word, if need be—it's no big deal, and in cases where they're particularly wrong, that last word is usually self-refuting.
Nobody is saying MCP is the only way to run malicious code, just that like VSCode extensions and NPM install scripts it has that problem.
I'm sure someone in the comments will say that inter-process communication requires auth (-‸ლ.
Rkt was better than Docker; the latter won.
${TBD} is better than MCP, my bet is on MCP.
Your experience with rkt is way different from mine. I would gladly accept "podman is..." or even "nerdctl is..." but I hate rkt so much and was thrilled when it disappeared from my life
I have to think the enthusiasm is coming mostly from the vibe-coding snakeoil salespeople that seem to be infecting every software company right now.
I can imagine a plugin-based server where the plugins are applications and AIs that all use MCP to interact. The server would add a discovery protocol.
That seems like the perfect use for MCP.