Posted by shloked 2 days ago

Claude’s memory architecture is the opposite of ChatGPT’s (www.shloked.com)
437 points | 229 comments | page 2
nilkn 1 day ago|
ChatGPT is designed to be addictive, with secondary potential for economic utility. Claude is designed to be economically useful, with secondary potential for addiction. That’s why.

In either case, I’ve turned off memory features in any LLM product I use. Memory features are more corrosive and damaging than useful. With a bit of effort, you can maintain a personal library of prompt contexts and manually paste them in when needed. This keeps you in control and maintains accuracy, without context rot or the extreme distortions that things like ChatGPT memory introduce.
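For illustration, a minimal sketch of what such a prompt-context library could look like (the file layout and names here are hypothetical):

```python
# Hand-managed prompt-context library: plain Markdown files you pick and
# paste yourself, so nothing is injected into the chat behind your back.
from pathlib import Path

LIBRARY = Path.home() / "prompt-library"  # e.g. ~/prompt-library/coding-style.md

def list_contexts() -> list[str]:
    """Names of every saved context file."""
    return sorted(p.stem for p in LIBRARY.glob("*.md"))

def load_contexts(*names: str) -> str:
    """Concatenate the chosen contexts into one block to paste into a chat."""
    parts = [(LIBRARY / f"{name}.md").read_text() for name in names]
    return "\n\n---\n\n".join(parts)

if __name__ == "__main__":
    print(list_contexts())
    print(load_contexts("coding-style", "project-overview"))  # hypothetical names
```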

simonw 1 day ago||
This post was great, very clear and well illustrated with examples.
shloked 1 day ago|
Thanks Simon! I’m a huge fan of your writing.
simonw 1 day ago||
Just blogged about your posts here https://simonwillison.net/2025/Sep/12/claude-memory/
shloked 1 day ago||
Wow! Thank you for the support Simon :)
doctoboggan 1 day ago||
I've been using LLMs for a long time, and I've thus far avoided memory features due to a fear of context rot.

So many times my solution when stuck with an LLM is to wipe the context and start fresh. I’d be afraid that the hallucinations, dead ends, and rabbit holes would be stored in memory and wouldn’t be easy to dislodge.

Is this an actual problem? Does the usefulness of the memory feature outweigh this risk?

Nestorius 1 day ago||
Regarding https://www.shloked.com/writing/chatgpt-memory-bitter-lesson: I’m confused about whether the author thinks ChatGPT is injecting those prompts even when memory is not enabled. With memory disabled, it’s pretty clear (at least in my instance) that no metadata about recent conversations or personal preferences is injected; the conversation stays standalone. If he was turning memory on and off for the experiment, maybe something got confused, or maybe I just didn’t read the article properly?
jimmyl02 1 day ago||
This is awesome! It lines up with the idea of agentic exploration versus RAG, and Anthropic seems to lean toward the agentic-exploration side.

It will be very interesting to see which approach "wins out" in the future.

patrickhogan1 1 day ago||
Interesting article! I keep second-guessing whether it’s worth pointing out mistakes to the LLM so it can improve in the future.
almosthere 1 day ago||
ChatGPT memory seems weird to me. It knows the company I work at and pretty much our entire stack, but when I go to view its stored memories, none of that is written anywhere.
vexna 1 day ago||
ChatGPT has 2 types of memory: the “explicit” memory you tell it to remember (it sometimes triggers on its own when it thinks you’ve said something important), and the global/project-level automated memory that is stored as embeddings.

The explicit memory is what you see in the memory section of the UI and is pretty much injected directly into the system prompt.

The global embeddings memory is accessed via runtime vector search.
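Roughly, a sketch of how that two-tier setup could be wired (the embedding function, store, and names below are made up for illustration, not OpenAI’s actual implementation):

```python
# Hypothetical two-tier chat memory (illustration only, not OpenAI's code).
# Tier 1: explicit memories, injected verbatim into the system prompt.
# Tier 2: embedded memories, retrieved at runtime by vector similarity.
import math

explicit_memories = ["User's name is Alex.", "Prefers concise answers."]

def build_system_prompt() -> str:
    # Tier 1 is lossless: the notes go straight into the prompt text.
    notes = "\n".join(f"- {m}" for m in explicit_memories)
    return f"You are a helpful assistant.\nKnown facts about the user:\n{notes}"

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model; returns a unit vector.
    vec = [0.0] * 64
    for i, ch in enumerate(text.lower()):
        vec[i % 64] += ord(ch) / 1000.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

embedded_memories = [(t, embed(t)) for t in [
    "Works on a Django + Postgres stack.",
    "Asked about Kubernetes autoscaling last week.",
]]

def recall(query: str, k: int = 1) -> list[str]:
    # Tier 2 is lossy: cosine similarity over embeddings, so near-misses
    # and spurious matches are possible (hence the hallucination complaints).
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), t) for t, v in embedded_memories]
    return [t for _, t in sorted(scored, reverse=True)[:k]]

print(build_system_prompt())
print(recall("what database do I use?"))
```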

I wish I could disable the embeddings memory and keep just the explicit one. The lossy nature of embeddings makes it hallucinate a bit too much for my liking, and GPT-5 seems to have made it worse.

everybodyknows 1 day ago||
How does the "Start New Chat" button modulate or select between the two types of memory you describe?
sunaookami 1 day ago||
Did you maybe talk about this in another chat? ChatGPT also uses past chats as memory.
SweetSoftPillow 1 day ago||
If I remember correctly, Gemini also has this feature? Is it more like Claude’s or ChatGPT’s?
jiri 1 day ago||
I am often surprised how efficient and transparent Claude Code’s use of memory is, in the form of “to do lists” in agent mode. I sometimes miss this in the web/desktop app during long conversations.
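For illustration, a toy version of a to-do list used as visible working memory (the fields and format here are made up, not Claude Code’s actual structure):

```python
# Toy to-do list as transparent agent memory (format is hypothetical,
# not Claude Code's actual implementation).
from dataclasses import dataclass, field

@dataclass
class TodoItem:
    text: str
    done: bool = False

@dataclass
class TodoList:
    items: list[TodoItem] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.items.append(TodoItem(text))

    def complete(self, index: int) -> None:
        self.items[index].done = True

    def render(self) -> str:
        # Shown to the user and re-fed to the model each turn, so the
        # agent's plan stays visible instead of hiding in opaque state.
        return "\n".join(f"[{'x' if it.done else ' '}] {it.text}" for it in self.items)

todos = TodoList()
todos.add("Reproduce the failing test")
todos.add("Fix the off-by-one in pagination")
todos.complete(0)
print(todos.render())
```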
anonbuddy 1 day ago|
Memory is the biggest moat. Do we really want to live in a future where one or two corporations know us better than we know ourselves?