
Posted by futurisold 6/27/2025

SymbolicAI: A neuro-symbolic perspective on LLMs (github.com)
224 points | 62 comments
bjt12345 6/27/2025|
How did you sort out mapping Python constructs to their semantic equivalents?

I hope you keep at this, you may be in the right place at the right time.

It's getting to the point where some of the LLMs are immediately just giving me answers in Python, which is a strong indication of what the future will look like with Agents.

futurisold 6/28/2025|
I'm struggling to understand the question. I'll revisit this when I wake up since it's quite late here.
bjt12345 7/5/2025||
I meant to ask: how did you decide which operators and functions to provide?
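(For context on what's being asked: the project maps ordinary Python operators onto LLM-backed, semantic behavior. The snippet below is only a toy illustration of that kind of mapping, not the project's actual implementation; `llm` is a stand-in for whatever backend call the real framework makes.)

    # Toy illustration of mapping Python operators to semantic operations.
    # `llm` stands in for a real model call and is left unimplemented here.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in a model call here")

    class Symbol:
        def __init__(self, value):
            self.value = value

        # `==` becomes a semantic equivalence check instead of string equality
        def __eq__(self, other) -> bool:
            reply = llm(f"Do these mean the same thing? Answer yes or no.\n"
                        f"A: {self.value}\nB: {other}")
            return reply.strip().lower().startswith("yes")

        # `+` becomes semantic composition of the two values
        def __add__(self, other) -> "Symbol":
            return Symbol(llm(f"Combine into one coherent text:\n"
                              f"1) {self.value}\n2) {other}"))

With a mapping like this, `Symbol('the sky is blue') == 'blue sky'` would be answered by the model rather than by string comparison.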
schmook 7/1/2025||
My only gripe with this is that there's already a term in computer science for this concept of "design by contract".

It's called a type system.

What you need for this is a more expressive type system.
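(A generic illustration of the distinction being drawn, unrelated to SymbolicAI's actual contract API: a design-by-contract check validates values at runtime on every call, whereas encoding the constraint in a type moves the check to the value's construction. In Python this can only be approximated with a "smart constructor"; a genuinely more expressive type system, e.g. refinement types, would reject the bad call statically.)

    from dataclasses import dataclass

    # Design by contract: the constraint is a runtime check inside the function.
    def summarize(text: str, max_words: int) -> str:
        assert max_words > 0, "precondition: max_words must be positive"
        result = " ".join(text.split()[:max_words])
        assert len(result.split()) <= max_words, "postcondition violated"
        return result

    # Type-level encoding: the constraint lives in the argument's type, so an
    # invalid value cannot be passed once it has been constructed.
    @dataclass(frozen=True)
    class PositiveInt:
        value: int
        def __post_init__(self):
            if self.value <= 0:
                raise ValueError("PositiveInt must be > 0")

    def summarize_typed(text: str, max_words: PositiveInt) -> str:
        return " ".join(text.split()[:max_words.value])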

futurisold 6/28/2025||
Thank you to everyone for participating in the discussion and for your overall support! As I said, I didn't expect this. I'm always just an email or tweet away, so you know how to reach me. It was great talking to you all!
nbardy 6/27/2025||
I love the symbol LLM first approaches.

I built a version of this a few years ago as a LISP

https://github.com/nbardy/SynesthesiaLisp

futurisold 6/27/2025|
Very nice, bookmarked for later. Interestingly enough, we share the same timeline: ~2 years ago is when a lot of interesting work spawned, as many people started to tinker.
jaehong747 6/27/2025||
Great job! It reminds me of genaiscript. https://microsoft.github.io/genaiscript/

    // read files
    const file = await workspace.readText("data.txt");

    // include the file content in the prompt in a context-friendly way
    def("DATA", file);

    // the task
    $`Analyze DATA and extract data in JSON in data.json.`;

futurisold 6/27/2025||
Thank you! I'm not familiar with that project, will take a look
krackers 6/27/2025||
Some of this seems a bit related to Wolfram Mathematica's natural language capabilities.

https://reference.wolfram.com/language/guide/FreeFormAndExte...

It can (in theory) do very similar things: natural-language input is a first-class citizen of the language and can operate on other objects. The whole thing came out almost a decade before LLMs; I'm surprised they haven't revamped it to make it really shine.

futurisold 6/27/2025|||
> I'm surprised that they haven't revamped it

No worries! I can't find it right now, but Wolfram had a stream (or short?) where he introduced "Function". We liked it so much that we implemented it within a day. Usage: https://github.com/ExtensityAI/symbolicai/blob/main/tests/en...

futurisold 6/27/2025|||
Wolfram's also too busy running his theory-of-everything experiments to focus on LLMs (quite sadly, if you ask me).
xpitfire 6/27/2025||
We've been working on some exciting things with SymbolicAI, and here are a few things that might interest the HN community.

Two years ago, we built a benchmark to evaluate multistep reasoning, tool use, and logical capabilities in language models. It includes a quality measure to assess performance and is built on a plugin system we developed for SymbolicAI.

- Benchmark & Plugin System: https://github.com/ExtensityAI/benchmark

- Example Eval: https://github.com/ExtensityAI/benchmark/blob/main/src/evals...

We've also implemented some interesting concepts in our framework:

- C#-style Extension Methods in Python: using GlobalSymbolPrimitive to extend functionalities (a rough sketch of the pattern follows this list).

    - https://github.com/ExtensityAI/benchmark/blob/main/src/func.py#L146

- Symbolic <> Sub-symbolic Conversion: using this for quality metrics, like a reward signal from the path integral of multistep generations.

    - https://github.com/ExtensityAI/benchmark/blob/main/src/func....
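(For readers unfamiliar with the extension-method pattern: the idea is attaching new methods to an existing class without subclassing it. The sketch below is a generic Python illustration of that idea, not the actual GlobalSymbolPrimitive mechanism; see the linked func.py for the real implementation, and treat the names here as stand-ins.)

    # Generic illustration of C#-style extension methods in Python: a decorator
    # that attaches a function to an existing class as a new method.
    def extension_method(cls):
        def register(func):
            setattr(cls, func.__name__, func)
            return func
        return register

    class Symbol:  # stand-in for the framework's Symbol class
        def __init__(self, value):
            self.value = value

    @extension_method(Symbol)
    def shout(self):
        """New method available on every Symbol instance after registration."""
        return str(self.value).upper() + "!"

    print(Symbol("hello world").shout())  # -> HELLO WORLD!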

For fun, we integrated LLM-based tools into a customizable shell. Check out the Rick & Morty-styled rickshell:

- RickShell: https://github.com/ExtensityAI/rickshell

We were also among the first to generate a full research paper from a single prompt and continue to push the boundaries of AI-generated research:

- End-to-End Paper Generation (Examples): https://drive.google.com/drive/folders/1vUg2Y7TgZRRiaPzC83pQ...

- Recent AI Research Generation:

    - Three-Body Problem: https://github.com/ExtensityAI/three-body_problem  

    - Primality Test: https://github.com/ExtensityAI/primality_test 

    - Twitter/X Post: https://x.com/DinuMariusC/status/1915521724092743997 
Finally, for those interested in building similar services, we've had an open-source, MCP-like API endpoint service available for over a year:

- SymbolicAI API: https://github.com/ExtensityAI/symbolicai/blob/main/symai/en...

ian_medusa_ceo 6/29/2025||
I already created a neurosymbolic code generator. Based on benchmarks, it received a score of 99 percent.
ian_medusa_ceo 6/29/2025||
I did it for python and will come out with other languages afterwards
Aynur4 6/29/2025||
Did you use symbolicai for that?
ian_medusa_ceo 7/13/2025||
Yes
GZGavinZhao 6/27/2025||
not to be confused with symbolica.ai
futurisold 6/28/2025|
+
b0a04gl 6/27/2025|
This works like functional programming: every symbol is a pure value and operations compose into clean, traceable flows. When you hit an ambiguous step, the model steps in. Just like IO in FP, the generative call is treated as a scoped side effect. That way the reasoning graph stays deterministic by default and only defers to the model when needed. Crazy demo though, love it.
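(A minimal sketch of that framing, as a generic illustration rather than SymbolicAI code: pure steps are plain functions, and the generative call is wrapped in an explicit effect value so the point where non-determinism enters stays visible and deferred.)

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Effect:
        """Deferred generative call: nothing runs until .run() gets a model."""
        prompt: str
        def run(self, model: Callable[[str], str]) -> str:
            return model(self.prompt)

    # Pure, deterministic steps: ordinary functions over values.
    def normalize(text: str) -> str:
        return " ".join(text.split()).lower()

    # The ambiguous step returns an Effect instead of calling a model directly.
    def classify(text: str) -> Effect:
        return Effect(f"Classify the sentiment of: {text!r}. Answer pos or neg.")

    pipeline = classify(normalize("  CRAZY   demo,   love it  "))
    # The side effect is scoped to this single call site:
    # result = pipeline.run(my_model_fn)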
futurisold 6/27/2025|
Yes, pretty much. We wanted it to be functional from the start. Even at the low level, everything's functional (it's even called functional.py/core.py). We're using decorators everywhere. This helped a lot with refactoring, extending the framework, containing bugs, etc.
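(To make "decorators everywhere" concrete, here's a hedged sketch of the general pattern, not the actual functional.py/core.py internals; `semantic_op` and `llm_backend` are made-up names. The decorated function only declares intent, and the decorator assembles the prompt and delegates to the backend.)

    import functools
    from typing import Callable

    def llm_backend(prompt: str) -> str:
        raise NotImplementedError("wire in the actual model call here")

    def semantic_op(instruction: str) -> Callable:
        """Decorator that replaces the function body with a prompted call."""
        def decorate(func: Callable) -> Callable:
            @functools.wraps(func)
            def wrapper(*args, **kwargs) -> str:
                prompt = f"{instruction}\nInput: {args} {kwargs}"
                return llm_backend(prompt)
            return wrapper
        return decorate

    @semantic_op("Translate the input text to German.")
    def translate(text: str) -> str:
        ...  # body intentionally empty; behavior comes from the decorator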