Posted by xerzes 19 hours ago

Show HN: Ghidra MCP Server – 110 tools for AI-assisted reverse engineering (github.com)
269 points | 63 comments
bbayles 9 hours ago|
LLMs are very good at understanding decompiled code. I don't think people have updated on the fact that almost everything is effectively open source now!
treesknees 8 hours ago|
Being able to read some iteration of potential source code doesn’t make it open source. Licensing, copyright, build chains, rights to modify and redistribute, etc are all factors.
aetherspawn 14 hours ago||
I have this weird thing with Ghidra where I can’t get it to disassemble .s37 or .hex flash files for PPC (e200z4). The bytes show OK and I’m pretty sure I’m selecting the right language. Any insight on things to try would be appreciated.

IDA work(ed) fine but I misplaced my license somewhere.

underlines 11 hours ago||
Tool stuffing degrades LLM tool-use quality. 100+ tools is crazy. We probably need a tool that does relevant tool retrieval and reranking lol
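
The retrieval-and-reranking idea can be sketched in a few lines. This is a toy illustration, not anything from the project: all tool names and descriptions below are invented, and the scoring is crude token overlap standing in for a real embedding or reranker model.

```python
# Toy tool-retrieval sketch: rank a large tool catalog against a query
# so only the top-k most relevant tools are exposed to the model.
# Tool names and descriptions here are made up for illustration.

def score(query: str, description: str) -> float:
    """Crude relevance score: fraction of query tokens found in the description."""
    q = set(query.lower().split())
    d = set(description.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve_tools(query: str, catalog: dict, k: int = 3) -> list:
    """Return the k tool names whose descriptions best match the query."""
    ranked = sorted(catalog, key=lambda name: score(query, catalog[name]), reverse=True)
    return ranked[:k]

catalog = {
    "decompile_function": "decompile a function at an address to C pseudocode",
    "list_strings": "list defined strings in the binary",
    "rename_symbol": "rename a symbol or label in the program",
    "get_xrefs": "list cross references to an address",
}

print(retrieve_tools("decompile the function at this address", catalog, k=2))
```

A real version would embed the tool descriptions once, embed each query, and rerank by cosine similarity, but the shape of the solution is the same: the model only ever sees a short, relevant slice of the 110-tool catalog.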
mrlnstk 13 hours ago||
Interesting project. In one of our reverse engineering projects we used Gemini to interpret the decompiled C code. Worked really well. Hope to publish it next month.
kfk 12 hours ago||
How do you handle intent orchestration? I see you have workflows, but imagine this is used in combination with other MCP servers: how do you make sure the prompt is sent to the right MCP server, and that the right tool or chain of tools gets executed?
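
One simple answer to that question (illustrative only — not how this project or any MCP client necessarily does it) is a router that scores each registered server's declared capabilities against the prompt and dispatches to the best match. Server names and capability keywords below are invented for the example.

```python
# Illustrative intent router: pick which MCP server should handle a prompt
# by matching prompt words against each server's declared capabilities.
# All server names and capability sets are hypothetical.

SERVERS = {
    "ghidra": {"decompile", "disassemble", "xref", "rename", "binary"},
    "filesystem": {"read", "write", "list", "file", "directory"},
    "web": {"fetch", "search", "url", "http"},
}

def route(prompt: str) -> str:
    """Return the server whose capability keywords overlap the prompt most."""
    words = set(prompt.lower().split())
    return max(SERVERS, key=lambda s: len(SERVERS[s] & words))

print(route("decompile this binary and rename the main function"))
```

In practice most clients just hand the model every server's tool list and let it pick, which is exactly where the tool-stuffing problem mentioned elsewhere in the thread comes from; an explicit routing layer like this keeps each completion's tool surface small.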
rustyhancock 16 hours ago||
Thank you for sharing this, it's a huge amount of work and I now know how I'll be spending this weekend!
poly2it 9 hours ago||
I saw this earlier, but opted for LaurieWired's MCP because it had a nice README and seemed to be the most common. How does this one compare? Are there any benchmark or functionality comparisons?

https://github.com/LaurieWired/GhidraMCP

DonHopkins 13 hours ago||
How could this be more efficiently and elegantly refactored as an Anthropic or MOOLLM skill set that was composable and repeatable (skills calling other skills, and iterating over MANY fast skill calls, in ONE LLM completion call, as opposed to many slow MCP calls ping-ponging back and forth, waiting for network delay + tokenization/detokenization cost, quantization and distortion each round)?

What parts of Ghidra (like cross referencing, translating, interpreting text and code) can be "uplifted" and inlined into skills that run inside the LLM completion call on a large context window without doing token IO and glacially slow and frequently repeated remote procedure calls to external MCP servers?

https://news.ycombinator.com/item?id=46878126

>There's a fundamental architectural difference being missed here: MCP operates BETWEEN LLM complete calls, while skills operate DURING them. Every MCP tool call requires a full round-trip — generation stops, wait for external tool, start a new complete call with the result. N tool calls = N round-trips. Skills work differently. Once loaded into context, the LLM can iterate, recurse, compose, and run multiple agents all within a single generation. No stopping. No serialization.
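
The round-trip arithmetic in that quote can be made concrete with a toy latency model. Every number below is an arbitrary placeholder, not a measurement of any real system:

```python
# Toy cost model for the MCP-vs-skills argument above.
# All timings are made-up placeholders, not benchmarks.

def mcp_latency(n_calls: int, rtt_s: float = 0.5, gen_s: float = 2.0) -> float:
    """N tool calls = N round-trips, each paying network RTT plus a fresh generation."""
    return n_calls * (rtt_s + gen_s)

def skill_latency(n_steps: int, gen_s: float = 2.0, per_step_s: float = 0.05) -> float:
    """Skills iterate inside one completion call: one generation, cheap in-context steps."""
    return gen_s + n_steps * per_step_s

print(mcp_latency(100))   # 250.0 seconds across 100 round-trips
print(skill_latency(100)) # 7.0 seconds in a single completion call
```

The model ignores the real trade-off, of course: in-context "steps" burn context window and can't touch external state, which is precisely what MCP buys you.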

>Skills can be MASSIVELY more efficient and powerful than MCP, if designed and used right. [...]

Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...

>I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.

speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...

More: Speed of Light -vs- Carrier Pigeon (an allegory for Skills -vs- MCP):

https://github.com/SimHacker/moollm/blob/main/designs/SPEED-...

clint 9 hours ago||
I wonder how this compares to the work I've been doing @ 2389 with the binary-re skill: https://github.com/2389-research/claude-plugins/tree/main/bi...

Specifically, the dynamic analysis skills could get a really big boost with this MCP server. I also wonder if this MCP server could be rephrased into a pure skill and not come with all the context baggage.

randomtoast 16 hours ago|
Now we just need to choose a game and run Claude Code with Ghidra MCP in a loop until the game is completely decompiled.