Posted by gronky_ 3/26/2025
But is there a market for stand-alone paid MCP services? It seems these will mostly be usurped by the models themselves sooner or later. I mean, if you create an MCP popular enough to actually make money, the foundation model will soon be able to just do it without your service. It's almost like you're doing experimentation on high-value agent features for free.
Also, something about the format just reeks of SOAP to me. It feels over-engineered. Time will tell, obviously.
https://www.reddit.com/r/ClaudeAI/comments/1hcrxl6/cline_can...
It feels like the kind of API-client work any organization or project would build to keep users/customers happy and sticky.
But there is hype around MCP as if independent devs could do something with it. And some will, just for fun; some will do it for open-source cred. But in my experience, longevity is usually the result of stable revenue.
I guess what I predict happening here: a few people will build some useful MCPs, realize there is no way to monetize them despite generating a lot of interest, and the most useful/popular MCPs will be integrated directly into the offerings of the foundation AI companies. In a few years we won't even remember the acronym, unless you happen to work for some big corp that wants to integrate your existing service into LLMs.
I feel like software license and business revenue models are only very loosely correlated. I guess if you are assuming there is no money at all to be made in MCP then my question wouldn't make sense.
2. it standardizes the method so every LLM doesn't need to do it differently
3. it creates a space for further shared development beyond tool use and discovery
4. it begins to open up hosted tool usage across LLMs for publicly hosted tools
5. for better or worse, it continues to drive the opinion that 'everything is a tool' so that even more functionality like memory and web searching can be developed across different LLMs
6. it offers a standard way to set up persistent connections to things like databases instead of handling them ad-hoc inside of each LLM or library
If you are looking for anything more, you won't find it. This just standardizes the existing tool use / function calling concept while adding minimal overhead. People shouldn't be booing this so much, but neither should they be dramatically cheering it.
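To make the "it just standardizes tool use" point concrete, here's roughly what the wire format looks like: MCP is JSON-RPC 2.0, and tool discovery/invocation are the `tools/list` and `tools/call` methods. The "echo" tool below is a made-up example; the message shapes follow the spec.

```python
import json

# Discovering tools: a tools/list request and a representative response.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "echo",
                "description": "Echo back the input text",
                "inputSchema": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"],
                },
            }
        ]
    },
}

# Invoking a discovered tool is a second JSON-RPC call.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "echo", "arguments": {"text": "hello"}},
}

print(json.dumps(call))
```

Squint and it's the same tool/function-calling JSON every LLM vendor already has, just with one agreed-on envelope.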
I think most of the hype around MCP is just excitement that tool use can actually work and seeing lots of little examples where it does.
Watching Claude build something in Blender was pure magic, even if it is rough around the edges.
Just like an app store or a Chrome extension store, we can have an LLM tools store.
I thought they ran locally only, so how would the OpenAI API connect to them when handling a request?
There are a variety of available clients documented here https://modelcontextprotocol.io/clients
If you haven't tried any of these yet, the first place to start is Claude Desktop. If you'd like to write your own agents, consider https://github.com/evalstate/fast-agent
EDIT: I may have misunderstood your question. If you're asking "how can I make an API call to OpenAI, and have OpenAI call an MCP server I'm running as part of generating its response to me", the answer is "you can't". You'll want a proxy API that you call which is actually an MCP client, responsible for coordinating between the MCP servers and the OpenAI API upstream agent.
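The proxy pattern above can be sketched in a few lines. Everything here is a hypothetical stub (`list_mcp_tools`, `call_mcp_tool`, `call_openai` are not real library APIs): in a real proxy they'd issue JSON-RPC to the MCP servers and HTTP calls to the chat completions endpoint. The point is the control flow — the proxy owns the tool loop, not OpenAI.

```python
# Stub: would issue "tools/list" to each configured MCP server.
def list_mcp_tools():
    return [{"name": "get_time", "description": "Current UTC time",
             "inputSchema": {"type": "object", "properties": {}}}]

# Stub: would issue "tools/call" to the server that owns the tool.
def call_mcp_tool(name, arguments):
    return {"content": [{"type": "text", "text": "2025-03-26T12:00:00Z"}]}

# Stub: would POST messages + tool schemas to the upstream chat API.
# The canned logic simulates the model requesting a tool once, then answering.
def call_openai(messages, tools):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"id": "1", "name": "get_time",
                                "arguments": {}}]}
    return {"content": "The current time is 2025-03-26T12:00:00Z."}

def proxy_chat(user_message):
    tools = list_mcp_tools()
    messages = [{"role": "user", "content": user_message}]
    reply = call_openai(messages, tools)
    while "tool_calls" in reply:  # model wants a tool; the proxy bridges it
        for tc in reply["tool_calls"]:
            result = call_mcp_tool(tc["name"], tc["arguments"])
            messages.append({"role": "tool", "tool_call_id": tc["id"],
                             "content": result["content"][0]["text"]})
        reply = call_openai(messages, tools)  # hand the result back
    return reply["content"]

print(proxy_chat("What time is it?"))
```

So the MCP servers stay local to you; OpenAI only ever sees tool schemas and tool results that your proxy forwards.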
https://blog.cloudflare.com/remote-model-context-protocol-se...
I'm wondering, though, about progress notifications and pagination. The latter especially should be supported, since otherwise some servers might not return the full list of tools. Has anyone tested this?
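For reference, the spec's pagination is cursor-based: a `tools/list` result may carry a `nextCursor`, which the client echoes back as the `cursor` param until it's absent. A client that ignores it sees only the first page. Minimal sketch, with `fake_tools_list` standing in for a real server:

```python
# Two fake pages: page one hands back a nextCursor, the last page omits it.
PAGES = {
    None: {"tools": [{"name": "tool_a"}], "nextCursor": "p2"},
    "p2": {"tools": [{"name": "tool_b"}]},
}

def fake_tools_list(cursor=None):
    return PAGES[cursor]

def list_all_tools():
    tools, cursor = [], None
    while True:
        result = fake_tools_list(cursor)
        tools.extend(result["tools"])
        cursor = result.get("nextCursor")  # opaque token; just echo it back
        if cursor is None:
            return tools

print([t["name"] for t in list_all_tools()])
```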
https://tron.fandom.com/wiki/Master_Control_Program
Don't hook up the MCP to any lab equipment:
By writing MCP servers for our services/apps, we give AI assistants a standardized way to integrate with tools and services across apps.
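The server side of that is small. A sketch under stated assumptions: names and wiring below are illustrative only (a real server would use an MCP SDK plus a transport like stdio or HTTP), but the shape is a registry of plain functions plus one dispatcher for the `tools/list` and `tools/call` methods.

```python
# Registry mapping tool names to functions and their JSON schemas.
TOOLS = {}

def tool(name, description, schema):
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description,
                       "inputSchema": schema}
        return fn
    return register

@tool("add", "Add two numbers",
      {"type": "object",
       "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
       "required": ["a", "b"]})
def add(a, b):
    return a + b

def handle(message):
    # Dispatch the two tool-related methods; a real server handles more.
    if message["method"] == "tools/list":
        return {"tools": [{"name": n, "description": t["description"],
                           "inputSchema": t["inputSchema"]}
                          for n, t in TOOLS.items()]}
    if message["method"] == "tools/call":
        params = message["params"]
        value = TOOLS[params["name"]]["fn"](**params["arguments"])
        return {"content": [{"type": "text", "text": str(value)}]}

result = handle({"method": "tools/call",
                 "params": {"name": "add", "arguments": {"a": 2, "b": 3}}})
print(result)
```

Wrap your existing API calls in functions like `add` above and any MCP-speaking assistant can discover and invoke them.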