Posted by gronky_ 3/26/2025
I built https://skeet.build/mcp where anyone can try out mcp for cursor and now OpenAI agents!
We built this because of a pain point I experienced as an engineer: dealing with crummy mcp setup, lack of support, and the complexity of trying to stand up your own.
Mostly for workflows like:
* start a PR with a summary of what I just did
* slack or comment to linear/Jira with a summary of what I pushed
* pull this issue from sentry and fix it
* find a bug and create a linear issue to fix it
* pull this linear issue and do a first pass
* pull in this Notion doc with a PRD, then create an API reference for it based on this code
* Postgres or MySQL schemas for rapid model development
Everyone seems to go for the hype, but ease of use, practical, pragmatic developer workflows, and high-quality, polished mcp servers are what we're focused on.
Lmk what you think!
Although the name is a bit unfortunate.
> While the project will always remain open-source and aims to be a universal AI assistant tool, the officially developed 'skills' and 'recipes' (allowing AI to interact with the external world through Ainara's Orakle server) will primarily focus on cryptocurrency integrations. The project's official token will serve as the payment method for all related services.
It wasn't bad enough that we now run servers locally to constantly compile code and tell us via json we made a typo... Soon we won't even be editing files on a disk, but accessing them through a json-rpc client/server? Am I getting this wrong?
I was able to make a pretty complex MCP server in 2 days for LLM task delegation.
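The core of it is just tool definitions over a stdio transport. A stripped-down sketch of the shape, using the official Python SDK's FastMCP helper (the tool below is a simplified placeholder, not the actual delegation logic):

```python
# Minimal sketch of an MCP server exposing a task-delegation tool.
# The delegate_task tool is a placeholder -- a real server would route
# the task to a worker LLM, a job queue, or a sub-agent.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("task-delegator")


@mcp.tool()
def delegate_task(description: str, worker: str = "default") -> str:
    """Hand a task description off to a worker and return its result."""
    # Placeholder: call another model / enqueue the job here.
    return f"[{worker}] accepted task: {description}"


if __name__ == "__main__":
    # stdio transport so an MCP client (Claude, Cursor, etc.) can spawn this
    # script as a subprocess and talk JSON-RPC over stdin/stdout.
    mcp.run(transport="stdio")
```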
service providers get more traffic, so they’re into it. makes sense.
claude 3.5 was great at the time, especially for stuff like web dev. but now deepseek v3 (0324) is way better value. gemini's my default for multimodal. openai still feels smartest overall. i’ve got qwq running locally. for deep research, free grok 3 and perplexity work fine. funny enough, claude 3.7 being down these two days didn’t affect me at all.
i checked mcp since i contribute to open source, but decided to wait. few reasons:
- setup’s kind of a mess. it’s like running a local python or node bridge, forwarding stuff via sse or stdio. feels more like bridging than protocol innovation (see the sketch after this list for what that bridging amounts to)
- I think eventually we need every app to ship with some kind of built-in AI-first protocol. I think only Apple (maybe Google) has that kind of influence. Think about Lightning vs USB-C.
- performance might be a bottleneck later, especially for multimodal.
- same logic as no. 2, but the question is: do you really want every app to be AI-first?
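what i mean in the first point, concretely: the host just spawns your server as a subprocess and speaks JSON-RPC over its stdin/stdout. rough sketch of that client side with the official python SDK (the server path here is made up):

```python
# Rough sketch of what an MCP host does under the hood for a stdio server:
# spawn the server process and exchange JSON-RPC over stdin/stdout.
# "my_server.py" is a made-up path for illustration.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["my_server.py"])


async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])


asyncio.run(main())
```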
main issue for me: tools that really improve productivity are rare. a lot of mcp use cases sound cool, but i'll never give full github access to a black box. same for other stuff. so yeah—interesting idea, but hard ceiling.
Second, isn't that doable with API calling? :)
E.g. suppose I want my agent to operate as a Discord bot listening on a channel via an MCP server subscribed to the messages. I.e. the MCP server itself is driving the loop, not the framework, with the agent doing the processing.
I can see how this could be implemented using MCP resource pubsub, with the plugin and agent being aware of this protocol and how to pump the message bus loop, but I'd rather not reinvent it.
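To make the shape concrete, here's roughly the inversion of control I'm after, with a plain asyncio queue standing in for the MCP resource pub/sub and placeholder stubs for the Discord listener and the agent:

```python
# Sketch of the inverted control flow: the event source drives the loop and
# the agent is just a callback. The asyncio queue stands in for MCP resource
# pub/sub; discord_listener and run_agent are placeholder stubs.
import asyncio
from typing import Optional

messages: asyncio.Queue[Optional[str]] = asyncio.Queue()


async def discord_listener() -> None:
    # Placeholder for the MCP-server side: each incoming channel message gets
    # published onto the bus (in MCP terms, a resource-updated notification).
    for i in range(3):
        await asyncio.sleep(1)
        await messages.put(f"channel message {i}")
    await messages.put(None)  # sentinel: channel closed


async def run_agent(message: str) -> None:
    # Placeholder for the agent/framework side: react to one update
    # (in MCP terms, re-read the subscribed resource and process it).
    print("agent handling:", message)


async def main() -> None:
    listener = asyncio.create_task(discord_listener())
    # The message source pumps the loop; the agent framework never polls.
    while (msg := await messages.get()) is not None:
        await run_agent(msg)
    await listener


asyncio.run(main())
```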
Is there a standard way of doing this already? Is it considered user logic that's "out of scope" for the MCP specification?
EDIT: added an example here https://github.com/avaer/mcp-message-bus
Does all of the event stuff and state stuff for you. Plus the orchestration.
It benefits MCP clients (ChatGPT, Claude, Cursor, Goose) more than the MCP servers and the services behind them (GitHub, Figma, Slack).