They'll also want the industry to rapidly move forward and connect data to AI. MCP has momentum.
I'll believe Google isn't being actively anti-competitive when I (a paying customer) can access and modify my Gmail, Google Contacts, and Google Sheets, plan routes in Google Maps, ... from my local LLM chatbot using MCP.
- Client to server: "tell me what you can do". This has always been hard, but in the LLM era it could actually work, because a textual response is enough (see the sketch after this list).
- Similarly, being able to ask "How do I..." might be feasible now. It should be possible to talk to a new server and automatically figure out how to use it.
- "How much is this going to cost me?" Plus some way to set a cost limit on a query.
Just seems like i+1 syndrome with computing.
It's so clearly a dead-end. It gives freethinking developers and innovators time to focus on the next generation of software.
Model Context Protocol seems good enough to me.
I'm guessing the main limitation is that it's harder to orchestrate, especially on the client side.
https://news.ycombinator.com/item?id=43631381
"The Agent2Agent Protocol (A2A)", 279 comments
Marketing the API as OpenAI-compatible and then getting 400s when I switch to Gemini leaves a sour taste in my mouth, and doesn't make me confident about their MCP support.
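For reference, the whole promise of "OpenAI-compatible" is that the switch should be nothing more than changing the base URL, key, and model name. A minimal sketch of that swap, assuming Google's documented compatibility endpoint and a current Gemini model name (both are my assumptions, not something verified here):

```python
from openai import OpenAI

# Supposedly a drop-in swap: same client, same chat.completions call,
# only base_url, api_key, and model change.
client = OpenAI(
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    api_key="GEMINI_API_KEY",  # placeholder; load the real key from the environment
)

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```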
Tools often need access to data sources, but I don't want to hard-code passwords.
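One common workaround is to keep the secret out of the tool definition entirely and inject it through the environment when the server process is launched. A minimal sketch, assuming a hypothetical database-backed MCP server that reads DB_PASSWORD from its environment:

```python
import os
import subprocess

# The secret comes from the shell environment (or a secrets manager) at
# launch time, so it never appears in source or in the client's tool
# configuration. "my-mcp-db-server" and "DB_PASSWORD" are hypothetical names.
password = os.environ.get("DB_PASSWORD")
if not password:
    raise SystemExit("DB_PASSWORD is not set; refusing to fall back to a hard-coded value")

subprocess.Popen(
    ["my-mcp-db-server", "--host", "localhost"],
    env={**os.environ, "DB_PASSWORD": password},
)
```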
It runs on x86 processors (under emulation), so it'd make some sense if Google offered it as an option in Google Cloud. Maybe they could offer OS2000, GCOS, and GECOS as well.