Posted by tamnd 2 days ago
[0] “Stuffing Context is not Memory, Updating Weights Is”: https://www.youtube.com/watch?v=Jty4s9-Jb78
I have no doubt that using LLMs as a notebook, which many of us have been doing (the best end-user application being Google’s NotebookLM), is a viable path forward for knowledge management. I find myself going back to certain LLM conversations as a ‘running log’ of a given project (akin to notebook-style thinking/building).
But what about merging in concepts from the version control world (Git/SVN) rather than the wiki world? Karpathy should explain more about what took him down the wiki route instead of that one, in these early days of using LLMs as notebooks.
This guy is not a good speaker. Is there any article about it?
This list is also part of my own contender in this race: https://zby.github.io/commonplace/ - my own LLM-operated knowledge base (this is the HTML rendering of that KB; the GitHub repo is linked there).
The main feature is that I use it to build a theory about such systems, and the neat trick is that LLMs can read this theory and implement it, so the theory itself also works as an LLM runtime.
It works for me, but it still has some rough edges, so I guess it is not for everyone.
Everything should live in the repo. Code and docs yes. But also the planning files, epics, work items, architectural documentation and decisions. Here is a small example of my Linux system doc: https://github.com/gchamon/archie/tree/main/docs
And you don't need to reinvent the wheel. Code docs can live either right next to the code in the readme, or in docs/ if they're too big for a single file or the context spans multiple modules. ADRs live in docs/architecture/decisions. Epics and work items can also live in the docs.
Everything is for agents and everything is for humans; agent-specific material goes in AGENTS.md or docs/agents or something similar, and even those files are for humans too.
In a nutshell: put everything in the repo, reuse standards as much as possible (the idea being that the structure is likely already embedded in the model), and always review documentation changes.
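To make the layout described above concrete, here is a minimal sketch of such a repo. The docs/architecture/decisions path and AGENTS.md are from the comment; the other directory and file names are my own illustration:

```text
repo/
├── README.md          # code docs that fit in a single file
├── AGENTS.md          # agent-facing instructions (still readable by humans)
├── src/               # the code itself
└── docs/
    ├── architecture/
    │   └── decisions/ # ADRs, e.g. 0001-choose-database.md (hypothetical name)
    ├── epics/         # hypothetical home for epics
    └── work-items/    # hypothetical home for work items
```

The point is that this mirrors conventions (README, docs/, ADR directories) that are common enough to plausibly be well represented in model training data, so an agent can navigate it without custom instructions.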
You just described spec-driven development.
I find it helps a LOT with discovery. The LLM spends a lot less time figuring out where things are. It's essentially "cached discovery".
Check it out: https://github.com/ractive/hyalo