
Posted by gronky_ 3/26/2025

OpenAI adds MCP support to Agents SDK (openai.github.io)
807 points | 267 comments | page 3
zoogeny 3/26/2025|
I'm curious what the revenue plan is for MCP authors. I mean, I can see wanting to add support for existing products (like a code/text editor, an image/sound/video editor, etc.)

But is there a market for stand-alone paid MCP services? It seems these will mostly be usurped by the models themselves sooner or later. I mean, if you create an MCP popular enough to actually make money, the foundation model will soon be able to just do it without your service. Almost like you are doing experimentation on high-value agent features for free.

Also, something about the format just reeks of SOAP to me. It feels over-engineered. Time will tell, obviously.

NicuCalcea 3/26/2025||
Most MCP servers are thin wrappers around an API, I don't think there will be a big paid market. I imagine they will be released like SDKs are now, either by companies themselves, or by individuals when there's no official implementation. A few dev agents can even write MCP servers for themselves.

https://www.reddit.com/r/ClaudeAI/comments/1hcrxl6/cline_can...
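To illustrate how thin these wrappers typically are, here is a minimal sketch in plain Python. The tool name, schema, and the `/issues/` endpoint are all made up for illustration; a real server would use an MCP SDK and speak JSON-RPC over a transport, but the substance is roughly this small:

```python
import json

# A hypothetical tool definition, in the JSON Schema style MCP uses
# to describe tool inputs.
ISSUE_TOOL = {
    "name": "get_issue",
    "description": "Fetch a single issue from the bug tracker API.",
    "inputSchema": {
        "type": "object",
        "properties": {"issue_id": {"type": "string"}},
        "required": ["issue_id"],
    },
}

def handle_tool_call(name, arguments, fetch):
    """Dispatch a tool call to the underlying HTTP API.

    `fetch` is injected so the 'wrapper' nature is visible: one tool
    call maps onto one API request, and almost nothing else happens.
    """
    if name == "get_issue":
        body = fetch("/issues/" + arguments["issue_id"])
        return {"content": [{"type": "text", "text": json.dumps(body)}]}
    raise ValueError("unknown tool: " + name)
```

Everything of value lives in the API being wrapped, which is why a standalone paid market for the wrappers themselves seems unlikely.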

amerine 3/26/2025|||
Not asked contentiously: why does an MCP revenue plan need to exist?

It feels like the kind of API-client work any organization or project would build to keep users/customers happy and sticky.

zoogeny 3/26/2025||
Right, that is why I said it makes sense for existing products. I mentioned desktop apps, but it would apply to any existing project: Jira, HubSpot, WordPress, Figma, etc.

But there is hype around MCP as if independent devs could do something with it. And some will, just for fun, or for open-source cred. But in my experience, longevity is usually the result of stable revenue.

I guess what I predict happening here: a few people will build some useful MCPs, realize there is no way to monetize them despite generating a lot of interest, and the most useful/popular MCPs will be integrated directly into the offerings of the foundational AI companies. And in a few years we won't even remember the acronym, unless you happen to work for some big corp that wants its existing service integrated into LLMs.

imtringued 3/27/2025||
The revenue plan for hosted services is that every user with an MCP client will count as a billable user.
victorbjorklund 3/26/2025||
What is the revenue plan for 99% of open source library authors?
achierius 3/26/2025|||
The currently-dominant open source paradigm (i.e. motivated and well-meaning developers work themselves to the bone under pressure from tech giants who will never pay them) is clearly not sustainable. Moreover, many are probably not happy with their code being used (without permission) to train machines whose putative intent is to replace those self-same developers. Now the question is, how many will want to do something similar, again?
zoogeny 3/26/2025|||
Are you insinuating that the only licensing model for independent MCP authors is open source? Are you assuming that MCPs are only viable as free (as in beer) projects?

I feel like software license and business revenue models are only very loosely correlated. I guess if you are assuming there is no money at all to be made in MCP then my question wouldn't make sense.

izwasm 3/26/2025||
MCP is great. But what I'd like to understand is: what's the difference between MCP and manually prompting the model with a list of tools and descriptions, then calling the specific function based on the LLM's response?
aurumque 3/26/2025||
1. it makes tool discovery and use happen elsewhere so programs become more portable

2. it standardizes the method so every LLM doesn't need to do it differently

3. it creates a space for further shared development beyond tool use and discovery

4. it begins to open up hosted tool usage across LLMs for publicly hosted tools

5. for better or worse, it continues to drive the opinion that 'everything is a tool' so that even more functionality like memory and web searching can be developed across different LLMs

6. it offers a standard way to set up persistent connections to things like databases instead of handling them ad-hoc inside of each LLM or library

If you are looking for anything more, you won't find it. This just standardizes the existing tool use / function calling concept while adding minimal overhead. People shouldn't be booing this so much, but nor should they be dramatically cheering it.
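The two approaches in the question really do carry the same information; the difference is only who formats it. A sketch (the `search_web` tool is invented, and the message shapes follow the MCP spec's JSON-RPC framing as I understand it):

```python
# What a client roughly sends and receives to discover tools over MCP
# (JSON-RPC 2.0 framing). A hand-rolled setup would instead paste the
# same metadata straight into the prompt.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_web",
                "description": "Search the web and return the top results.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

def tools_as_prompt(resp):
    """The 'manual' alternative: flatten the same tool metadata into
    prompt text, which every project previously did its own way."""
    lines = [f"- {t['name']}: {t['description']}"
             for t in resp["result"]["tools"]]
    return "You can call these tools:\n" + "\n".join(lines)
```

Same content either way; MCP just fixes the envelope so clients and servers don't each invent their own.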

philomath_mn 3/27/2025||
Great breakdown, appreciate it.

I think most of the hype around MCP is just excitement that tool use can actually work and seeing lots of little examples where it does.

Watching Claude build something in Blender was pure magic, even if it is rough around the edges.

mercenario 3/28/2025|||
You can build all the tools yourself, or you can just go to a "tools store", install it and use it. MCP is just the standard everyone can use to build, share and use these tools.

Just like an app store, a chrome extension store, we can have a LLM tools store.

victorbjorklund 3/26/2025||
None. Kind of like the difference between using a REST API and inventing your own API format. Both will work. One is standard.
esafak 3/27/2025||
Where are the community-created server APIs for your format? Why would you re-invent the wheel and rewrite them all yourself?
jswny 3/26/2025||
Does anyone know how MCP servers would be used via the API?

I thought they ran locally only, so how would the OpenAI API connect to them when handling a request?

peterldowns 3/26/2025||
You'd use a client which runs locally to coordinate between the LLM/agent and the available tools, similarly to how it's described here https://modelcontextprotocol.io/quickstart/client

There are a variety of available clients documented here https://modelcontextprotocol.io/clients

If you haven't tried any of these yet, the first place to start is Claude Desktop. If you'd like to write your own agents, consider https://github.com/evalstate/fast-agent

EDIT: I may have misunderstood your question. If you're asking "how can I make an API call to OpenAI, and have OpenAI call an MCP server I'm running as part of generating its response to me", the answer is "you can't". You'll want a proxy API that you call which is actually an MCP client, responsible for coordinating between the MCP servers and the OpenAI API upstream agent.
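The coordination described above can be sketched in a few lines. Everything here is a stand-in: `call_model` represents the upstream LLM API, and the server objects represent MCP client connections; none of these names come from a real SDK:

```python
def run_agent_turn(call_model, mcp_servers, messages):
    """Coordinate one conversation turn: route each model tool call to
    the MCP server that advertises that tool, feed the result back as a
    message, and repeat until the model answers directly."""
    # Map tool name -> owning server, from each server's advertised tools.
    routing = {name: srv for srv in mcp_servers for name in srv.tool_names()}
    while True:
        reply = call_model(messages)
        if reply.get("tool_call") is None:
            return reply["text"]  # model answered directly; turn is done
        call = reply["tool_call"]
        result = routing[call["name"]].call_tool(call["name"],
                                                 call["arguments"])
        messages = messages + [{"role": "tool", "name": call["name"],
                                "content": result}]
```

This loop is essentially what a local MCP client or proxy API does for you: the model never talks to the servers, only to the coordinator.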

knowaveragejoe 3/27/2025||
You can run remote MCP servers and configure whatever client to use them. This should work even via OpenAI's API (perhaps not yet, but it's just another 'tool' to call).

https://blog.cloudflare.com/remote-model-context-protocol-se...

bibryam 3/26/2025||
command (stdio) mode is for local, and SSE is for remote
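In client configuration the two transports show up roughly like this (format modeled on Claude Desktop's `mcpServers` config; the server names and URL are placeholders, and support for the remote/SSE form varies by client):

```json
{
  "mcpServers": {
    "local-logs": {
      "command": "python",
      "args": ["-m", "my_log_server"]
    },
    "remote-search": {
      "url": "https://example.com/sse"
    }
  }
}
```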
cruffle_duffle 3/26/2025||
Was wondering if this would ever happen. I wrote an MCP server that hooked up Azure Monitor (or whatever the hell Microsoft is calling it) via Microsoft's Python SDK so I could get it to query our logs without using command-line tools. Took about half a day, mostly due to writing against the wrong Microsoft SDK. It will be nice to let ChatGPT have a crack at this too!
esafak 3/26/2025||
That makes it table stakes for any agent framework.
ondrsh 3/26/2025||
This seems to implement just the tools functionality — no resources, prompts, roots, or sampling. I can't blame them.

I'm wondering, though, about progress notifications and pagination. The latter especially should be supported, as otherwise some servers might not return their full list of tools. Has anyone tested this?
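For context, the pagination concern is about the spec's cursor mechanism: a `tools/list` response may carry a `nextCursor`, and a client that ignores it only ever sees the first page. A sketch of the draining loop (`list_tools` is a stand-in for one `tools/list` request):

```python
def list_all_tools(list_tools):
    """Collect every tool from a paginated tools/list endpoint.

    `list_tools(cursor)` stands in for one request; per the MCP spec,
    each page may include a `nextCursor` that the client must send back
    to fetch the remaining pages.
    """
    tools, cursor = [], None
    while True:
        page = list_tools(cursor)
        tools.extend(page["tools"])
        cursor = page.get("nextCursor")
        if cursor is None:  # a cursor-unaware client stops after page one
            return tools
```

If a given SDK never sends the cursor back, servers that paginate will silently appear to have fewer tools than they do.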

ginko 3/26/2025||
Master Control Program?
neilv 3/27/2025||
Someone at Anthropic might've had a sense of humor when they named integrating LLMs with the outside world... after a pop-culture evil AI.

https://tron.fandom.com/wiki/Master_Control_Program

Don't hook up the MCP to any lab equipment:

https://www.youtube.com/watch?v=lAcYUt2QbAo

jtimdwyer 3/26/2025||
Who does he calculate he is?!?
polishdude20 3/27/2025||
Ideally, in the future we won't need an MCP server when the AI can just write Unix shell commands to do anything it needs to get the job done? It seems using an MCP server and having the AI know about its "tools" is more of a training-wheels approach.
analyte123 3/27/2025||
If you have a clean, well-documented API that can be understood in under 30 minutes by a decent software engineer, congrats: you are MCP ready. I wonder how many discussions there will be about "adding MCP support" to software without this prerequisite.
ursaguild 3/27/2025|
The real benefit I see from MCP is that we are now writing programs for both users and AI assistants/agents.

By writing MCP servers for our services/apps, we are providing a standardized way for AI assistants to integrate with tools and services across apps.
