Posted by yi_wang 3 hours ago
It compiles to a single ~27MB binary — no Node.js, Docker, or Python required.
Key features:
- Persistent memory via markdown files (MEMORY.md, HEARTBEAT.md, SOUL.md), compatible with OpenClaw's format
- Full-text search (SQLite FTS5) plus semantic search (local embeddings, no API key needed)
- Autonomous heartbeat runner that checks tasks on a configurable interval
- CLI, web interface, and desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, etc.
- Apache 2.0 licensed
Install: `cargo install localgpt`
I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.
GitHub: https://github.com/localgpt-app/localgpt
Website: https://localgpt.app
Would love feedback on the architecture or feature ideas.
├── MEMORY.md # Long-term knowledge (auto-loaded each session)
├── HEARTBEAT.md # Autonomous task queue
├── SOUL.md # Personality and behavioral guidance
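As an illustration (not a required schema), a HEARTBEAT.md task queue might look something like this:

```markdown
<!-- Illustrative example only; the real HEARTBEAT.md format may differ. -->
# Heartbeat tasks

- [ ] Summarize anything new in the notes inbox into MEMORY.md
- [ ] Check the side-project issue tracker and draft triage notes
- [x] Migrate old journal entries into the knowledge base
```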
Say what you will, but AI really does feel like living in the future. As far as the project is concerned, pretty neat, but I'm not really sure about calling it "local-first" as it's still reliant on an `ANTHROPIC_API_KEY`. I do think that local-first will end up being the future long-term, though. I built something similar last year (unreleased), also in Rust, but it also ran the model locally (you can see how slow/fast it is here [1], keeping in mind I have a 3080 Ti and was running Mistral-Instruct).
I need to revisit this project and release it, but building in the context of the OS is pretty mind-blowing, so kudos to you. I think the paradigm of how we interact with our devices will fundamentally shift in the next 5-10 years.
See here:
https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...
Does this mean the inference is remote and only context is local?
The README gives only an Anthropic example, but, judging by the source code [1], you can use other providers, including Ollama, just by changing that one line of the config file.
[1] https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...
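I haven't checked the exact key names, so the snippet below is only a guess at what that one line might look like; treat the keys as hypothetical and check the linked source for the real schema.

```toml
# Hypothetical sketch -- key names are guesses; see the linked source for the real config.
[provider]
name = "ollama"                      # e.g. instead of "anthropic"
base_url = "http://localhost:11434"  # Ollama's default local endpoint
model = "llama3"
```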
Your docs and this post are all written by an LLM, which doesn't reflect much effort.
I've also been burned many times when software docs said one thing and, after many hours of debugging, I found out the code did something different.
LLMs are so good at creating decent descriptions and keeping them up to date that I believe docs are the number one thing to use them for. Yes, you can tell a human didn't write them, but so what? If they're correct, I see no issue at all.
Indeed. Are you verifying that they are correct, or are you glancing at the output and seeing something that seems plausible enough and then not really scrutinizing? Because the latter is how LLMs often propagate errors: through humans choosing to trust the fancy predictive text engine, abdicating their own responsibility in the process.
As a consumer of an API, I would much rather have static types and nothing else than incorrect LLM-generated prosaic documentation.
Somehow I doubt at this point in time they can even fail at something so simple.
At some point, for some stuff, we have to trust LLMs to be correct 99% of the time. I believe summaries, translations, and code docs are in that category.
I wish this was an effective deterrent against posting low effort slop, but it isn't. Vibe coders are actively proud of the fact that they don't put any effort into the things they claim to have created.
Professional codependent leveraging anonymity to target others. The internet is a mediocrity factory.
"cargo install localgpt" under Linux Mint.
Git clone and change Cargo.toml by adding
"""rust
# Desktop GUI
eframe = { version = "0.30", default-features = false,
features = [ "default_fonts", "glow", "persistence", "x11", ] }
"""
That is, add "x11".
Then `cargo build --release` succeeds.
I am not a Rust programmer.
cd localgpt/
edit Cargo.toml and add "x11" to the eframe features
cargo install --path .   (this installs the binary into ~/.cargo/bin)
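Putting those steps together, the whole workaround is roughly this (the Cargo.toml edit is still manual, and the paths are just the cargo defaults):

```sh
# Rough sequence of the workaround described above; the Cargo.toml edit is manual.
git clone https://github.com/localgpt-app/localgpt
cd localgpt/
# edit Cargo.toml: add "x11" to the eframe features list
cargo build --release      # check that it compiles
cargo install --path .     # installs the localgpt binary into ~/.cargo/bin
```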
Hey! Is that Kai Lentit guy hiring?
Uses MLX for the local LLM on Apple Silicon. Performance has been pretty good on a base-spec M4 mini.
Nor do I want to install little apps when I don't know what they're doing, or whether they're reading my chat history and Mac system folders.
What I did was create a Shortcut on my iPhone to write iMessages to an iCloud file, which syncs to my Mac mini (quickly), and a script loop on the mini processes my messages. It works.
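The loop on the mini is nothing fancy; roughly something like this (the iCloud Drive path is the standard location, but the file name and the per-message handling are placeholders, not my exact script):

```sh
#!/bin/sh
# Sketch of the polling loop on the Mac mini. The inbox file name and the
# per-message handling are placeholders; only the iCloud Drive path is standard.
INBOX="$HOME/Library/Mobile Documents/com~apple~CloudDocs/imessage-inbox.txt"
SEEN=0
while true; do
  COUNT=$(wc -l < "$INBOX" | tr -d ' ')
  if [ "$COUNT" -gt "$SEEN" ]; then
    # hand any new lines to whatever processes the messages
    tail -n "$((COUNT - SEEN))" "$INBOX" | while IFS= read -r MSG; do
      printf 'new message: %s\n' "$MSG"
    done
    SEEN=$COUNT
  fi
  sleep 10
done
```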
Wonder if others have ideas so I can iMessage the bot; I'm in iMessage and don't really want to use another app.
It's fast and amazing for generating embeddings and doing lookups.
I assume I could just adjust the TOML to point to a locally hosted DeepSeek API, right?