Posted by denssumesh 1 day ago

We replaced RAG with a virtual filesystem for our AI documentation assistant (www.mintlify.com)
141 points | 69 comments
kjgkjhfkjf 25 seconds ago|
Seems like it would be simpler to give the agent tools to issue ChromaDB (or SQL) queries directly, rather than giving the LLM unix-like tools that are converted into queries under the hood using a complicated proprietary setup.
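
A minimal sketch of that direct-query idea, assuming the chromadb Python client and a hypothetical "docs" collection exposed to the agent as a single tool (none of this is from the article):

    # Hypothetical: one retrieval tool that issues ChromaDB queries directly,
    # instead of emulating unix tools over a virtual filesystem.
    import chromadb

    client = chromadb.PersistentClient(path="./chroma")   # local on-disk store
    docs = client.get_or_create_collection("docs")

    def query_docs(question: str, n_results: int = 5) -> list[str]:
        """Tool the agent can call: vector search over the docs collection."""
        res = docs.query(query_texts=[question], n_results=n_results)
        return res["documents"][0]

    # e.g. query_docs("how do I configure custom domains?")
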
tensor 8 minutes ago||
This is one of the most confusing claims I've seen in a long time. Grep and others over files would be the equivalent of an old-fashioned keyword search, whereas most RAG uses vector search. But everything else they claim about a file system just suggests that they don't know anything about databases.

I'm not familiar with how most out-of-the-box RAG systems categorize data, but with a database you can index content in literally any way you want. You could do it like a filesystem with hierarchy, you could do it with tags, or any other design you can dream up.

The search can be keyword, like grep, or vector, like RAG, or use the ranking algorithms that traditional text search uses (TF-IDF, BM25), or a combination of them. You don't have to use just the top X ranked documents; you could, just like grep, evaluate all results past whatever matching threshold you have.
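
As a toy illustration of such a combination, here is a naive blend of BM25 keyword scores with embedding similarity (the rank_bm25 and sentence-transformers packages, the example corpus, and the 50/50 weighting are my own assumptions):

    # Toy hybrid search: keyword (BM25) + vector (cosine) scores, naively blended.
    # In practice the two score scales differ, so you'd normalize before mixing.
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, util

    docs = ["how to configure custom domains",
            "api reference for authentication",
            "troubleshooting build failures"]

    bm25 = BM25Okapi([d.split() for d in docs])
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, convert_to_tensor=True)

    def hybrid_search(query: str, k: int = 2):
        kw = bm25.get_scores(query.split())                # keyword relevance
        sem = util.cos_sim(model.encode(query, convert_to_tensor=True), doc_vecs)[0]
        blended = [0.5 * kw[i] + 0.5 * float(sem[i]) for i in range(len(docs))]
        return sorted(zip(docs, blended), key=lambda t: -t[1])[:k]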

Search is an extremely rich field with a ton of very good established ways of doing things. Going back to grep and a file system is going back to ... I don't know, the 60s level of search tech?

brap 1 minute ago|
I get what you’re saying, and you’re right, however I can also see where they’re coming from:

Empirically, agents (especially the coding CLIs) seem to be doing so much better with files, even if the tooling around them is less than ideal. With other custom tools they instantly lose 50 IQ points, if they even bother using the tools in the first place.

softwaredoug 3 hours ago||
The real thing I think people are rediscovering with file system based search is that there’s a type of semantic search that’s not embedding-based retrieval. One that looks more like how a librarian organizes files into shelves based on the domain.

We’re rediscovering forms of search we’ve known about for decades. And it turns out they’re more interpretable to agents.

https://softwaredoug.com/blog/2026/01/08/semantic-search-wit...

wielebny 2 hours ago||
Someone simply assumed at some point that RAG must be based on vector search, and everyone followed.
softwaredoug 2 hours ago|||
It’s something of a historical accident

We started with LLMs when everyone in search was building question answering systems. Those architectures look like the vector DB + chunking we associate with RAG.

Agents' ability to call tools, using any retrieval backend, calls that into question.

We really shouldn’t start RAG with the assumption we need that. I’ll be speaking about the subject in a few weeks:

https://maven.com/p/7105dc/rag-is-the-what-agentic-search-is...

TeMPOraL 2 hours ago|||
Right. R in RAG stands for retrieval, and for a brief moment initially, it meant just that: any kind of tool call that retrieves information based on query, whether that was web search, or RDBMS query, or grep call, or asking someone to look up an address in a phone book. Nothing in RAG implies vector search and text embeddings (beyond those in the LLM itself), yet somehow people married the acronym to one very particular implementation of the idea.
macNchz 2 minutes ago|||
Yeah, there's a weird thing where people would get really focused on whether something is "actually doing RAG" when it's pulling in all sorts of outside information, just not using some kind of purpose-built RAG tooling or embeddings.

Now, the pendulum on that general concept seems to be swinging the opposite direction where a lot of those people just figured out that you don't need embeddings. That's true, but I'd suggest that people don't overindex on thinking that means embeddings are not actually useful or valuable. Embeddings can be downright magical in what you can build with them, they're just one more tool at your disposal.

You can mix and match these things, too! Indexing your documents into semantically nested folders for agents to peruse? Try chunking and/or summarizing each one, and putting the vectors in sidecar files, or even YAML frontmatter. Disks are fast these days; you can rip through a lot of files indexed like that before you come close to needing something more sophisticated.
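
A rough sketch of that sidecar idea, assuming the python-frontmatter and sentence-transformers packages (model name, paths, and the brute-force scan are my own illustrative choices, not details from the comment above):

    # Embed each doc once, store the vector in its YAML frontmatter, and
    # brute-force scan at query time. Fine for small corpora.
    import pathlib
    import frontmatter
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def index_docs(root="docs"):
        for path in pathlib.Path(root).rglob("*.md"):
            post = frontmatter.load(path)
            post["embedding"] = model.encode(post.content).tolist()  # vector lives in frontmatter
            path.write_text(frontmatter.dumps(post))

    def search(query, root="docs", k=5):
        q = model.encode(query)
        scored = []
        for path in pathlib.Path(root).rglob("*.md"):
            post = frontmatter.load(path)
            if "embedding" in post.metadata:
                v = np.array(post["embedding"])
                cos = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
                scored.append((cos, str(path)))
        return sorted(scored, reverse=True)[:k]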

oceansky 1 hour ago|||
I'm still using the old definition, never got the memo.
adfm 1 hour ago||
That’s OK. Most got ReST wrong, too.
rafterydj 1 hour ago||||
Stuck it on my calendar, looking forward to it.
KPGv2 1 hour ago|||
You seem like someone who knows what they're doing, and I understand the theoretical underpinnings of LLMs (math background), but I have little kids that were born in 2016 and so the entire AI thing has left me in the dust. Never any time to even experiment.

I am active in fandoms and want to create a search where someone can ask "what was that fanfic where XYZ happened?" and get an answer back in the form of links to fanfiction that are responsive.

This is a RAG system, right? I understand I need an actual model (that's something like ollama), the thing that trawls the fanfiction archive and inserts whatever it's supposed to insert into one of these vector DBs, and I need a front-facing thing I write, that takes a user query, sends it to ollama, which can then search the vector DB and return results.

Or something like that.

Is it a RAG system that solves my use case? And if so, what software might I go about using to provide this service to me and my friends? I'm assuming it's pretty low in resource usage since it's just text indexing (maybe indexing new stuff once a week).

The goal is self-hosting. I don't wanna be making monthly payments indefinitely for some silly little thing I'm doing for me and my friends.

I am just a stay-at-home dad these days and don't have anyone to ask. I've been totally out of the tech game for a few years now. I hope that you could respond (or someone else could), and maybe it will help other people.

There's just so many moving parts these days that I can't even hope to keep up. (It's been rather annoying to be totally unable to ride this tech wave the way I've done in the past; watching it all blow by me is disheartening).

9dev 23 minutes ago|||
In the definition of RAG discussed here, that means the workflow looks something like this (simplified for brevity): When you send your query to the server, it will first normalise the words, then convert them to vectors, or embeddings, using an embedding model (there are also plain stochastic mechanisms to do this, but today most people mean a purpose-built LLM). An embedding is essentially an array of numeric coordinates in a high-dimensional space, e.g. [1, 2.522, …, -0.119]. The server can now use that to search a database of arbitrary documents that have pre-generated embeddings of their own. Those are usually generated when the documents are inserted into the database, following the same process as your search query above, so every record in the database has its own discrete set of embeddings to be queried during searches.

The important part here is that you now don’t have to compare strings anymore (like looking for occurrences of the word "fanfiction" in the title and content), but instead you can perform arbitrary mathematical operations to compare query embeddings to stored embeddings: 1 is closer to 3 than 7, and in the same way, fanfiction is closer to romance than it is to biography. Now, if you rank documents by that proximity and take the top 10 or so, you end up with the documents most similar to your query, and thus the most relevant.

That is the R in RAG; the A as in Augmentation happens when, before forwarding the search query to an LLM, you also add all the results that came back from your vector database, with a prefix like "the following records may be relevant to answer the user's request". And that brings us to G as in Generation, since the LLM now responds to the question aided by a limited set of relevant entries from a database, which should allow it to yield very relevant responses.
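
Roughly, in code, that loop might look like this (a bare-bones sketch using the ollama Python client since that's what you mentioned; the model names, the in-memory dict standing in for a real vector DB, and the prompt wording are all placeholder assumptions):

    # Retrieve -> Augment -> Generate, in miniature.
    import ollama
    import numpy as np

    stories = {
        "story_123": "Alice and Bob swap bodies after a lab accident...",
        "story_456": "A coffee-shop AU where the crew never leaves Earth...",
    }

    def embed(text):
        return np.array(ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"])

    index = {sid: embed(text) for sid, text in stories.items()}   # one vector per story

    def answer(question: str, k: int = 3) -> str:
        q = embed(question)
        ranked = sorted(index, key=lambda sid: -float(
            q @ index[sid] / (np.linalg.norm(q) * np.linalg.norm(index[sid]))))[:k]   # Retrieve
        context = "\n".join(f"[{sid}] {stories[sid]}" for sid in ranked)              # Augment
        prompt = ("The following records may be relevant to the user's request:\n"
                  f"{context}\n\nQuestion: {question}\nAnswer with story IDs/links.")
        reply = ollama.chat(model="llama3.1",                                         # Generate
                            messages=[{"role": "user", "content": prompt}])
        return reply["message"]["content"]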

I hope this helps :-)

johnathandos 1 hour ago|||
I think the example you give is a little backwards — a RAG system searches for relevant content before sending anything to the LLM, and includes any content retrieved this way in the generative prompt. User query -> search -> results -> user query + search results passed in same context to LLM.
ivanovm 58 minutes ago||||
I don't think this was a simple assumption. LLMs used to be much dumber! GPT-3-era LLMs were not good at grep, they were not that good at recovering from errors, and they were not good at making follow-up queries over multiple turns of search. Multiple breakthroughs in code generation, tool use, and reasoning had to happen on the model side to make vector-based RAG look like unnecessary complexity.
bluegatty 2 hours ago||||
It was the terminology that did that more than anything. The term 'RAG' just has a lot of consequential baggage. Unfortunately.
morkalork 2 hours ago|||
Doesn't have to be tho, I've had great success letting an agent loose on an Apache Lucene instance. Turns out LLMs are great at building queries.
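
For example, a single tool that forwards an LLM-written Lucene query string to a Solr core (Solr's default parser accepts Lucene syntax); this is not necessarily how the parent set it up, and the endpoint and core name are made up:

    # The agent composes the Lucene query string itself, e.g.
    # lucene_search('title:"virtual filesystem" AND body:RAG')
    import requests

    def lucene_search(query: str, rows: int = 10) -> list[dict]:
        resp = requests.get(
            "http://localhost:8983/solr/docs/select",
            params={"q": query, "rows": rows, "wt": "json"},
        )
        resp.raise_for_status()
        return resp.json()["response"]["docs"]
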
czhu12 1 hour ago|||
Similar effort with PageIndex [1], which basically creates a table-of-contents-like tree. Then an LLM traverses the tree to figure out which chunks are relevant for the context in the prompt.

1: https://github.com/VectifyAI/PageIndex

khalic 2 hours ago|||
This kind of circles back to ontological NLP, which used knowledge representation as a primitive for language processing. There is _a ton_ of work in that direction.
softwaredoug 2 hours ago||
Exactly. And LLMs supervised by domain experts unlock a lot of capabilities to help with these types of knowledge organization problems.
skeptrune 2 hours ago|||
I think it's cool that LLMs can effectively do this kind of categorization on the fly at relatively large scale. When you give the LLM tools beyond just "search", it really is effectively cheating.
UltraSane 2 hours ago|||
Inverted indexes have the major advantage of supporting Boolean operators.
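
A toy illustration of why: with an inverted index, Boolean operators fall out of plain set operations over posting lists (the documents here are made up):

    from collections import defaultdict

    docs = {1: "rag with vector search",
            2: "grep over a virtual filesystem",
            3: "vector search over a filesystem"}

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.split():
            index[term].add(doc_id)

    # vector AND search NOT rag
    hits = (index["vector"] & index["search"]) - index["rag"]
    print(hits)  # {3}
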
whattheheckheck 2 hours ago||
Turns out the millions of people in knowledge work aren't librarians and they wing shit everywhere
nlawalker 12 minutes ago||
Relative to making docs accessible to AI via filesystem tools, I've been looking around to see what kinds of patterns SDK authors are using to get AI coding agents to use the freshest documentation, and Vercel is doing something interesting with their AI SDK that I haven't seen elsewhere (although maybe I just haven't looked hard enough).

The "ai" npm package includes a root-level docs folder containing .mdx versions of the docs from their site, specific to the version of the package. Their intended AI-assisted developer experience is that people discover and install their ai-sdk skill (via their npx skills tool, which supports discovery and install of skills from most any provider, not just Vercel). The SKILL.md instructs the agent to explicitly ignore all knowledge that may have been trained into its model, and to first use grep to look for docs in node_modules/ai/docs/ before searching the website.

https://github.com/vercel/ai/blob/main/skills/use-ai-sdk/SKI...

sunir 1 hour ago||
I am really enjoying this renaissance in CLI world applications. There's so much possible.

I'm working on a related challenge which is mounting a virtual filesystem with FUSE that mirrors my Mac's actual filesystem (over a subtree like ~/source), so I can constrain the agents within that filesystem, and block destructive changes outside their repo.

I have it so every repo has its own long-lived agent. They do get excited and start changing other repos, which messes up memory.

I didn't want to create a system user per repo because that's obnoxious, so I created a single claude system user, and I am using the virtual file system to manage permissions. My gmail repo's agent can for instance change the gmail repo and the google_auth repo, but it can't change the rag repo.
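
For anyone curious what that can look like, here's a stripped-down sketch using the fusepy package: a passthrough mount that only allows writes inside an allow-listed set of repos. This is illustrative and not the actual bashguard code; the paths, repo names, and gating policy are assumptions.

    import errno
    import os
    from fuse import FUSE, Operations, FuseOSError

    class GuardedFS(Operations):
        def __init__(self, root, writable_repos):
            self.root = root
            self.writable = [os.path.join(root, r) for r in writable_repos]

        def _full(self, path):
            return os.path.join(self.root, path.lstrip("/"))

        def _guard(self, path):
            # deny any mutation outside the allow-listed repos
            if not any(self._full(path).startswith(p + os.sep) for p in self.writable):
                raise FuseOSError(errno.EACCES)

        # read side: plain passthrough
        def getattr(self, path, fh=None):
            st = os.lstat(self._full(path))
            return {k: getattr(st, k) for k in
                    ("st_mode", "st_nlink", "st_size", "st_uid", "st_gid",
                     "st_atime", "st_mtime", "st_ctime")}

        def readdir(self, path, fh):
            return [".", ".."] + os.listdir(self._full(path))

        def open(self, path, flags):
            if flags & (os.O_WRONLY | os.O_RDWR):
                self._guard(path)
            return os.open(self._full(path), flags)

        def read(self, path, size, offset, fh):
            os.lseek(fh, offset, os.SEEK_SET)
            return os.read(fh, size)

        # write side: gated
        def write(self, path, data, offset, fh):
            self._guard(path)
            os.lseek(fh, offset, os.SEEK_SET)
            return os.write(fh, data)

        def unlink(self, path):
            self._guard(path)
            return os.unlink(self._full(path))

    if __name__ == "__main__":
        # e.g. the gmail agent may touch gmail/ and google_auth/, nothing else
        FUSE(GuardedFS(os.path.expanduser("~/source"), ["gmail", "google_auth"]),
             "/mnt/agent-source", foreground=True)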

Edit: I'm publishing it here. It's still under development. https://github.com/sunir/bashguard

zbyforgotpass 14 minutes ago||
I don't know - we are discussing techniques - having information in files, or in a semantic database, or in a relational database - as if there were one way that could dominate all information access. But finding the right information is not one task. If the needed information is a summary of expenses over a period of time, then the best source is a relational database; if it is who heads the HR department in a particular company, then it can probably be found easily on the company intranet pages (which are a kind of graph database). It does not really matter much whether the searcher is a human or an LLM - there are some differences in speed, in the context length that is useful at any one time, and in the fact that LLMs are amnesiac - but these are just parameters. The task is immensely complicated for humans, there is no one architecture for it, and there will not be one for LLMs.

I also vibed a brainstorming note with my knowledge base system. The initial prompt: """when I read "We replaced RAG with a virtual filesystem for our AI documentation assistant (mintlify.com)" title on HackerNews - the discussion is about RAG, filesystems, databases, graphs - but maybe there is something more fundamental in how we structure the systems so that the LLM can find the information needed to answer a question. Maybe there is nothing new - people had elaborate systems in libraries even before computers - but maybe there is something. Semantic search sounds useful - but knowing which page to return might be nearly as difficult as answering the question itself - and what about questions that require synthesis from many pages? Then we have distillation - an table of content is a kind of distillation targeting the task of search. """

Then I added a few more comments and the LLM linked the note with the other pages in my KB. I am documenting this because there were many voices against posting LLM-generated content, arguing that the prompt alone would be enough. IMHO the prompt is not enough, because the thought was also grounded in the whole theory I gathered in the KB. And that is also kind of on topic here. Anyway, here is the vibed note: https://zby.github.io/commonplace/notes/charting-the-knowled...

pwr1 30 minutes ago||
This mirrors something we ran into building an AI pipeline for audio content. The problem with traditional RAG is that chunking destroys the structure that actually matters — you end up retrieving fragments that are semantically similar but contextually useless.

The filesystem metaphor works because it preserves hierarchy. Documents have sections, sections have relationships, and those relationships carry meaning that gets lost when you flatten everything into embeddings.

Curious how this handles versioning though. Docs change constantly and stale context fed to an LLM is arguably worse than no context at all.

namxam 24 minutes ago||
And you did not teach it to access chroma directly, because there is no adapter? Or because it is so much better at using FS tooling?

But in the end, I would expect that you could add a skill / instructions on how to use ChromaDB directly.

To be honest, I have no idea what ChromaDB is or how it works. But building an overlay FS seems like quite a lot of work.

Galanwe 2 hours ago||
I am not familiar with the tech stack they use, but from an outsider point of view, I was sort of expecting some kind of fuse solution. Could someone explain why they went through a fake shell? There has to be a reason.
skeptrune 2 hours ago|
100% agree a FUSE mount would be the way to go given more time and resources.

Putting Chroma behind a FUSE adapter was my initial thought when I was implementing this but it was way too slow.

I think we would also need to optimize grep even if we had a FUSE mount.

This was easier in our case because we didn’t need 100% POSIX compatibility for our read-only docs use case; the agent only used a subset of bash commands to traverse the docs anyway. This also avoids any extra infra overhead or maintenance of EC2 nodes/sandboxes that the agent would have to use.
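
As a rough illustration of the fake-shell idea, the sketch below answers an allow-list of read-only commands from an in-memory doc store instead of a real filesystem (the command handling and the DOCS dict are my own guesses, not the actual implementation):

    import re
    import shlex

    DOCS = {
        "getting-started.mdx": "Install the CLI...\nRun init to scaffold docs.",
        "api/auth.mdx": "Authentication uses API keys...",
    }

    def fake_shell(command: str) -> str:
        argv = shlex.split(command)
        if argv[0] == "ls":
            prefix = argv[1].rstrip("/") + "/" if len(argv) > 1 else ""
            return "\n".join(p for p in DOCS if p.startswith(prefix))
        if argv[0] == "cat":
            return DOCS.get(argv[1], f"cat: {argv[1]}: No such file")
        if argv[0] == "grep":
            pattern = argv[-2] if argv[1].startswith("-") else argv[1]
            rx = re.compile(pattern, re.IGNORECASE)
            hits = [f"{path}:{line}" for path, text in DOCS.items()
                    for line in text.splitlines() if rx.search(line)]
            return "\n".join(hits) or "(no matches)"
        return f"{argv[0]}: command not allowed"

    # e.g. fake_shell("grep -ri auth .") -> "api/auth.mdx:Authentication uses API keys..."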

readitalready 1 hour ago|||
Yah, my Claude Code agents run a ton of Python and bash scripts. You're probably missing out on a lot of use cases without full tool use through POSIX compatibility.
Galanwe 1 hour ago|||
Makes sense, thanks for clarifying!
seanlinehan 3 hours ago|
This is definitely the way. There are good use cases for real sandboxes (if your agent is executing arbitrary code, you'd better have it do so in an air-gapped environment).

But the idea of spinning up a whole VM just to use unix IO primitives is way overkill. It makes way more sense to let the agent spit out unix-like tool calls and then use whatever your prod stack uses to do IO.

skeptrune 2 hours ago|
100% agree. However, if there were no resource tradeoffs, then a FUSE mount would probably be the way to go.