Posted by meetpateltech 3 days ago
I give up.
I don't see any debugging features yet
but I found an example implementation in the docs:
https://community.openai.com/t/error-oauth-step-when-connect...
Something went wrong with setting up the connection
In the devtools, the request that failed was to `https://chatgpt.com/backend-api/aip/connectors/links/oauth/c...` which send this reply: Token exchange failed: 401, message='Unauthorized', url=URL('https://api.mapbox.com/oauth/access_token')
our MCP also works fine with Claude, Claude Code, Amp, lm studio and other but not all MCP clients
MCP spec and client implementations are a bit tricky when you're not using FastMCP (which we are not).
Ours doesn’t support SSE.
2025/09/11 01:16:13 HTTP 200 GET 0.1ms /.well-known/oauth-authorization-server
2025/09/11 01:16:13 HTTP 200 GET 2.5ms /
2025/09/11 01:16:14 HTTP 404 GET 0.2ms /favicon.svg
2025/09/11 01:16:14 HTTP 404 GET 0.2ms /favicon.png
2025/09/11 01:16:14 HTTP 200 GET 0.2ms /favicon.ico
2025/09/11 01:16:14 HTTP 200 GET 0.1ms /.well-known/oauth-authorization-server
2025/09/11 01:16:15 HTTP 201 POST 0.3ms /mcp/register
2025/09/11 01:16:27 HTTP 200 GET 1.4ms /
with the frontend showing: "Error creating connector" and the network call showing:
{ "detail": "1 validation error for RegisterOAuthClientResponse\n Input should be a valid dictionary or instance of RegisterOAuthClientResponse [type=model_type, input_value='{\"client_id\":\"ChatGPT.Dd...client_secret_basic\"}\\n', input_type=str]\n For further information visit https://errors.pydantic.dev/2.11/v/model_type" }Two replies to this comment have failed to address my question. I must be missing something obvious. Does ChatGPT not have any MCP support outside of this, and I've just been living in an Anthropic-filled cave?
What’s being released here is really just proper MCP support in ChatGPT (like Claude has had for ages now) though their instructions regarding needing to specific about which tools to use make me wonder how effective it will be compared to Claude. I assume it’s hidden behind “Developer Mode” to discourage the average ChatGPT user from using it given the risks around giving an LLM read/write access to potentially sensitive data.
Since one of these replies is mine, let me clarify.
From the documentation:
When using developer mode, watch for prompt injections and
other risks, model mistakes on write actions that could
destroy data, and malicious MCPs that attempt to steal
information.
The first warning is equivalent to a SQL injection attack[0].The second warning is equivalent to promoting untested code into production.
The last warning is equivalent to exposing SSH to the Internet, configured such that your account does not require a password to successfully establish a connection, and then hoping no one can guess your user name.
From literally the very first sentences in the linked resource:
ChatGPT developer mode is a beta feature that provides full
Model Context Protocol (MCP) client support for all tools,
both read and write. It's powerful but dangerous ...
From what I’ve seen, most teams experimenting with MCP don’t grasp the risks. They are literally dropping auth tokens into plaintext config files.
The moment anything with file system access gets wired in, those tokens are up for grabs, and someone’s going to get burned.
You know they have 1b WAU right?