Posted by knowsuchagency 6 hours ago

Show HN: Mcp2cli – One CLI for every API, 96-99% fewer tokens than native MCP (github.com)
Every MCP server injects its full tool schemas into context on every turn — 30 tools cost ~3,600 tokens per turn whether the model uses them or not. Over 25 turns with 120 tools, that's ~362,000 tokens spent on schemas alone.

mcp2cli turns any MCP server or OpenAPI spec into a CLI at runtime. The LLM discovers tools on demand:

    mcp2cli --mcp https://mcp.example.com/sse --list             # ~16 tokens/tool
    mcp2cli --mcp https://mcp.example.com/sse create-task --help  # ~120 tokens, once
    mcp2cli --mcp https://mcp.example.com/sse create-task --title "Fix bug"
No codegen, no rebuild when the server changes. Works with any LLM — it's just a CLI the model shells out to. Also handles OpenAPI specs (JSON/YAML, local or remote) with the same interface.

Token savings are real, measured with cl100k_base: 96% for 30 tools over 15 turns, 99% for 120 tools over 25 turns.
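The arithmetic behind those savings can be sketched in a few lines. The per-tool figures (~120 tokens per injected schema, ~16 tokens per `--list` line, ~120 tokens per `--help` call) come from the post; how many tools the agent actually inspects is an assumption:

```python
# Back-of-envelope comparison of schema-token overhead.
# Native MCP re-injects every tool schema on every turn; the CLI
# approach pays for discovery only when a tool is actually used.

SCHEMA_TOKENS_PER_TOOL = 120   # full JSON schema, injected per turn (native MCP)
LIST_TOKENS_PER_TOOL = 16      # one line of `--list` output, paid once
HELP_TOKENS = 120              # one `--help` call, paid once per tool used

def native_mcp_tokens(tools: int, turns: int) -> int:
    """Every schema sits in context on every turn."""
    return tools * SCHEMA_TOKENS_PER_TOOL * turns

def on_demand_tokens(tools: int, tools_used: int) -> int:
    """One --list, plus one --help per tool the agent actually invokes."""
    return tools * LIST_TOKENS_PER_TOOL + tools_used * HELP_TOKENS

native = native_mcp_tokens(30, 15)            # 30 tools, 15 turns -> 54,000 tokens
lazy = on_demand_tokens(30, tools_used=3)     # assume the agent inspects 3 tools
print(f"savings: {1 - lazy / native:.1%}")
```

The exact percentage depends on how many `--help` calls the agent makes, but because the native cost scales with tools × turns while the on-demand cost is roughly flat, the savings grow with both tool count and conversation length.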

It also ships as an installable skill for AI coding agents (Claude Code, Cursor, Codex): `npx skills add knowsuchagency/mcp2cli --skill mcp2cli`

Inspired by Kagan Yilmaz's CLI vs MCP analysis and CLIHub.

https://github.com/knowsuchagency/mcp2cli

75 points | 41 comments
ejoubaud 3 hours ago|
How does this differ from mcporter? https://github.com/steipete/mcporter/
philipp-gayret 4 hours ago||
Someone had to do it. MCP in bash would make tools composable, which I think is the strongest benefit for high-capability agents like Claude, Cursor, and the like, which can write Bash better than I can. I haven't gotten into MCP since its early release because of the issues you named. Nice work!
silverwind 3 hours ago||
How exactly would the LLM discover such unknown CLI commands?
kristopolous 43 minutes ago||
I've got a Qdrant-based approach that I'm working on that solves that: https://github.com/day50-dev/infinite-mcp

Essentially, I've cloned thousands of MCP servers and used the readmes and star ratings to answer the Qdrant query (star ratings as a boost score have been an attack vector — yes, I know, it's an incomplete product [1]), then present the results as a JSON response with "one-shots", which this author calls CLIs.

I think I became discouraged and moved on because my results weren't that great, but search is hard and I shouldn't give up.

I'll get back to it, seeing how much traction this tool is getting.

[1] There needs to be a legitimacy post-filter so that GitHub user micr0s0ft or what-have-you doesn't go to the top. I'm sure there are some best-practice ways of doing this and I shouldn't invent my own (which would involve checking whether the repo appears on non-UGC sites, I guess?!), but I haven't looked into it.

Mashimo 3 hours ago||
Skills, or telling it about the --list command, would be my guess.
jkisiel 3 hours ago||
How is it different from 'mcporter', already included in e.g. openclaw?
Ozzie_osman 3 hours ago||
I kind of feel like it might be better to go from CLI to MCP.
tuananh 3 hours ago||
MCP just needs to add dynamic tool discovery and lazy-load tools — that would solve this token problem, right?
rvz 3 hours ago||
MCP itself is a flawed standard to begin with, as I said before [0], and it wraps around an API from the start.

You might as well directly create a CLI tool that works with the AI agents which does an API call to the service anyway.

[0] https://news.ycombinator.com/item?id=44479406
