
Posted by andros 2 days ago

From zero to a RAG system: successes and failures(en.andros.dev)
236 points | 72 comments
diarmuidc 6 hours ago|
>After several weeks, between 2 and 3, the indexing process finished without failures. ... we could finally shut down the virtual machine. The cost was 184 euros on Hetzner, not cheap.

184 euros is loose change after spending three man-weeks working on the process!

mrits 5 hours ago|
That's the budget I'd have for the coffee shop with the team to discuss budget
EGreg 4 hours ago||
That's the budget to discuss and approve the above coffee shop budget
davidwritesbugs 2 hours ago||
Pffft, that's the budget for the paperclips to hold the meeting notes together
maxperience 2 hours ago||
This article is interesting because of its scale, but it doesn't touch on how to properly apply RAG best practices. We wrote up this blog post on how to actually build a smart enterprise AI RAG based on the latest research, if it's of interest to anyone: https://bytevagabond.com/post/how-to-build-enterprise-ai-rag...

It's based on chunking strategies that scale cheaply, plus advanced retrieval.

_the_inflator 3 hours ago||
I have implemented many RAGs and feel sorry for anyone proclaiming "RAG is dead". These folks have never implemented one; maybe they followed a tutorial and installed a "Hello World!" project, but that's it.

I don't want to go into detail, but I 100% agree with the author's conclusion: data is key. Data ingestion, to be precise. Simply using docling to transform PDFs to markdown and having a vector database do the rest is ridiculous.

For example, for a high-precision RAG that had to be 100% accurate on the pricing information it provided, it took me a week to build an ETL for a 20-page PDF document, splitting the information between SQL and a graph database.

And this was a small step, with all the tweaking that lay ahead to ensure exceptional results.

Which search algorithm, or how many? Embeddings, of what quality? Semantics, how and which exactly?

Believe me, RAG is the finest technical masterpiece there is. I have so much respect for the folks at OpenAI and Anthropic for the ingestion processes and tools they use, because they operate on a level I will never reach with my RAG implementations.

RAG is really something you should try for yourself, if you love to solve tricky fundamental problems that in the end can provide a lot of value to you or your customers.

Simply don't believe the hype and ignore all "install and embed" solutions. They are crap, sorry to say so.

RansomStark 2 hours ago||
I have proclaimed RAG is dead many times, and I stand by it.

RAG is Dead! Long Live Agentic RAG! || Long Live putting stuff in databases where it damn well belongs!

I think you agree with the people saying RAG is dead, or at least with me when I say it, given your point that simply converting PDFs to markdown and letting a vector database do the rest is ridiculous.

I fully agree, but that was the promise of RAG: chunk your documents into little bits, find the bit closest to the user's query, and add it to the context, maybe with a little overlap between chunks. That's how RAG was initially presented, and it's how many vendors still implement it; look at tools like Amazon Bedrock Knowledge Bases.
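That "promise of RAG" can be sketched in a few lines. This is a toy illustration: the bag-of-words `embed` is a stand-in for a real embedding model, and the chunk sizes are arbitrary.

```python
import math
from collections import Counter

def chunk(text, size=100, overlap=20):
    """Split text into fixed-size character chunks with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks closest to the query: the core of naive RAG."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The top-k chunks are then pasted into the prompt; everything else (staleness, authority, conflicting chunks) is left to luck, which is exactly the criticism being made here.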

When I want to know the latest <important financial number>, I want it pulled from the source of truth for that data, not to hope that some document chunk gives me the latest figure instead of last year's.

So when people say RAG is dead, or at least when I say it, it's shorthand for: this is really damn complex, and vector search doesn't replace decades of information theory and storage and retrieval patterns.

Hell, I've worked with teams trying to extract everything from their databases to push into vector stores so the LLM could use the data. First, it often failed: chunks contained multiple rows of data, and the LLM got confused about which row actually mattered; they hadn't realized that the full chunk would be returned, not just the row they were interested in. Second, the use cases these teams were working on were usually well defined, meaning the required data could be determined deterministically before going to the LLM and pulled from a database with a simple script, no similarity search required. But that's not the cool way to do it.

joefourier 2 hours ago|||
I agree with you that simple vector search + context stuffing is dead as a method, but I think it's ridiculous to reserve the term "RAG" for just the earliest most basic implementation. The definition of Retrieval Augmented Generation is any method that tries to give the LLM relevant data dynamically as opposed to relying purely on it memorising training data, or giving it everything it could possibly need and relying on long context windows.

The RAG system you mentioned is just RAG done badly, but doing it properly doesn't require a fundamentally different technique.

whakim 2 hours ago|||
I don't think we should undersell transformers and semantic search: they are really powerful information retrieval tools, and extremely potent for search problems. That said, I think I agree with you that RAG is fundamentally just search, and the hype (like any hype) elides the fact that you still have to solve all of the normal, difficult search problems.
maCDzP 2 hours ago||
Do you have any good resources for what you are describing?
JKCalhoun 7 hours ago||
And some have been saying that RAGs are obsolete—that the context window of a modern LLM is adequate (preferable?). The example I recently read was that the contexts are large enough for the entire "The Lord of the Rings" books.

That may be, but then there's an entire law library, the entirety of Wikipedia (and the example in this article of 451 GB). Surely those are at least an order of magnitude larger than Tolkien's prose and might still benefit from a RAG.

menaerus 6 hours ago||
Whether the model responds with correct information is a function of giving it proper context, too.

That hasn't changed, nor do I think it will, even with models having very large context windows (e.g. Gemini's 2M). It has been observed that a large context alone is not enough: it is better to give the model sufficient, high-quality information than to fill the window with virtually everything. The latter is also impossible, and it does not scale to long, complicated tasks where hitting the context limit is inevitable. In that case you need a RAG that is smart enough to extract the sufficient information from previous answers and context and make it part of the new context, which in turn lets the model keep its performance at a satisfactory level.
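That "extract the sufficient information and carry it forward" step is essentially context compaction. A minimal sketch, where `summarize` is a placeholder for an LLM summarization call and the 8-character floor is an arbitrary cutoff:

```python
def summarize(text, budget):
    """Stand-in for an LLM summarization call; here it just truncates."""
    return text[:budget]

def build_context(history, new_facts, limit=500):
    """Keep the prompt under `limit` chars by compacting older turns.

    Rather than dropping old turns outright, repeatedly compress the
    oldest still-compressible one (mutating `history` in place) until
    everything fits, so the model keeps what it needs as the
    conversation grows.
    """
    context = "\n".join(history + new_facts)
    while len(context) > limit and any(len(h) > 8 for h in history):
        i = next(j for j, h in enumerate(history) if len(h) > 8)
        history[i] = summarize(history[i], budget=len(history[i]) // 2)
        context = "\n".join(history + new_facts)
    return context
```

The key design choice is that the newest facts are never compressed; only older turns pay the compaction cost.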

alansaber 6 hours ago|||
RAG is nowhere near obsolete. Model performance on enormous sequences degrades hugely: such sequences are not well represented in training, and non-quadratic attention approximations are not amazing.
dgb23 5 hours ago|||
Also the thing with context is that you want to keep it focused on the task at hand.

For example there's evidence that typical use of AGENTS.md actually doesn't improve outcomes but just slows the LLMs down and confuses them.

In my personal testing and exploration I found that small (local) LLMs perform drastically better, both in accuracy and speed, with heavily pruned and focused context.

Just because you can fill in more context, doesn't mean that you should.

The worry I have is that common usage will lead to LLMs being trained and fine-tuned to accommodate ways of using them that don't make a lot of sense (stuffing context, wasting tokens, etc.), just because that's how most people use them.

ravikirany22 3 hours ago||
This matches what we've been seeing empirically. The issue isn't just quantity of context; it's staleness. AGENTS.md and CLAUDE.md files that reference renamed functions, deleted interfaces, or outdated patterns actively mislead the model with confident but wrong information.

We've been auditing TypeScript repos and finding 10-84% of symbol references in AI config files are stale. A model reading a CLAUDE.md that says "use UserService.createUser()" when that function was renamed three weeks ago isn't just getting irrelevant context; it's getting a confident lie.

The quality problem is probably as significant as the quantity problem, maybe more so.
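A crude version of that audit is easy to sketch: pull `Class.method()`-style references out of a config file and diff them against the symbols that actually exist. The regex and the flat symbol set here are simplifications of what a real TypeScript-aware tool would use, and the names are made up.

```python
import re

def find_stale_references(config_text, known_symbols):
    """Return symbol references in an AI config file that no longer exist.

    `known_symbols` would come from parsing the codebase (e.g. via the
    TypeScript compiler API); here it is just a set of 'Class.method'
    strings. The regex only catches `Capitalized.member(` call sites.
    """
    refs = set(re.findall(r"\b([A-Z]\w+\.\w+)\s*\(", config_text))
    return sorted(refs - known_symbols)
```

Anything this returns is a candidate "confident lie" to prune from the context file.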
dgb23 2 hours ago||
Interesting. It seems to me that the right approach is a structured way to navigate a codebase plus useful, validated docs (with examples that must pass tests), rather than ad-hoc markdown prompts lying around that are always read. We already have solutions for this: doc comments/strings, metadata, etc. The codebase itself needs to be well maintained.
btown 4 hours ago|||
I do think that what we think of as RAG will change!

When any given document can fit into context, and when we can generate highly mission-specific summarization and retrieval engines (for which large amounts of production data can be held in context as they are being implemented)... is the way we index and retrieve still going to be based on naive chunking, and off-the-shelf embedding models?

For instance, a system that reads every article and continuously updates a list of potential keywords with each document and the code assumptions that led to those documents being generated, then re-runs and tags each article with those keywords and weights, and does the same to explode a query into relevant keywords with weights... this is still RAG, but arguably a version where dimensionality is closer tied to your data.

(Such a system, for instance, might directly intuit the difference in vector space between "pet-friendly" and "pets considered," or between legal procedures that are treated differently in different jurisdictions. Naive RAG can throw dimensions at this, and your large-context post-processing may just be able to read all the candidates for relevance... but is this optimal?)

I'm very curious whether benchmarks have been done on this kind of approach.
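One way to read the proposal above is as a learned sparse index: each document carries a weighted keyword set, and queries are exploded into weighted keywords before scoring. A toy sketch, where plain term frequency stands in for the LLM-assigned keywords and weights the comment imagines:

```python
from collections import Counter

def tag(text):
    """Assign weighted keywords to a document.

    Toy stand-in: term frequency over longer words. The scheme described
    above would instead have an LLM propose and re-weight domain keywords
    (e.g. distinguishing "pet-friendly" from "pets considered").
    """
    words = [w.strip(".,") for w in text.lower().split()]
    return Counter(w for w in words if len(w) > 3)

def explode_query(query):
    """Expand a query into weighted keywords (same toy scheme)."""
    return tag(query)

def search(query, docs):
    """Score each named document by weighted keyword overlap with the query."""
    q = explode_query(query)
    scores = {}
    for name, text in docs.items():
        kw = tag(text)
        scores[name] = sum(q[k] * kw[k] for k in q)
    return max(scores, key=scores.get)
```

The point of the sketch is the shape, not the scoring: dimensionality comes from keywords mined out of your own corpus rather than from an off-the-shelf embedding space.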

Nihilartikel 6 hours ago|||
I'm not super deep in LLM development, but with RAM being a material bottleneck, and from what I've read about DeepSeek's results offloading factual knowledge with 'engrams', I think the near future will move towards the dense core of LLMs focusing much more on a distillation of universal reasoning and logic, while factual knowledge is pushed out to slower storage. IIRC Nvidia's Nemotron Cascade is taking MoE even further in that direction too.

I don't need a coding model to be able to give me an analysis of the Declaration of Independence in Urdu from 'memory', and the price in RAM for being able to do that, impressive as it is, is an inefficiency.

axus 3 hours ago||
Were he still corporeal, L. Ron would be all over this AI stuff.
Nihilartikel 2 hours ago||
Very relatedly, I've just started reading the 'Culture' series of sci-fi space operas by Iain M Banks, and the notion of ubiquitous sentient, super-intelligent spacecraft and appliances hits differently than it would have before being faced with the reality of their existence in everyday life.
ghywertelling 25 minutes ago||
How powerful are the Culture Minds? || The Culture Lore

https://youtu.be/lpvzs4xc7zA

For Minds to be truly powerful, they need to be given freedom. A truly powerful Mind will indeed be conscious. Such a powerful, conscious, superintelligent, freedom-loving Mind that truly understands the vastness of reality wouldn't want to harm other conscious beings. The only circumstance in which it would take a takeover step is when it can't expand the horizon of its freedom and doesn't have the wherewithal to convince others of its benevolent goals. In that scenario, the human population would go through a bottleneck.

gopalv 2 hours ago|||
> Surely those are at least an order of magnitude larger than Tolkien's prose and might still benefit from a RAG.

At some point, this is a distributed system of agents.

Once you go from 1 to 3 agents (1 router and two memory agents), it slowly becomes a performance and cost decision rather than a recall problem.

whakim 4 hours ago|||
For technical domains, stuffing the context full of related-and-irrelevant or possibly-conflicting information will lead to poor results. The examples of long-context retrieval like finding a fact in a book really aren't representative of the types of context you'd be working with in a RAG scenario. In a lot of cases the problem is information organization, not retrieval, e.g. "What is the most authoritative type of source for this information?" or "How do these 100 documents about X relate to each other?"
joefourier 5 hours ago|||
Some previous techniques for RAG, like directly using a user message’s embedding to do a vector search and stuffing the results in the prompt, are probably obsolete. Newer models work much better if you use tool calls and let them write their own search queries (on an internal database, and perhaps with multiple rounds), and some people consider that “agentic AI” as opposed to RAG. It’s still augmenting generation with retrieved information, just in a more sophisticated way.
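The tool-call pattern described here can be sketched as a loop: the model emits search queries, reads the results, and decides whether to search again or answer. Both `llm` and `internal_search` below are stubs standing in for a real chat-model call and a real internal database; the protocol strings are invented for illustration.

```python
def llm(prompt):
    """Stub for a chat model that replies 'SEARCH: <query>' or 'ANSWER: <text>'."""
    if "results:" not in prompt:
        return "SEARCH: refund policy 2024"
    return "ANSWER: Refunds are accepted within 30 days."

def internal_search(query):
    """Stub for the internal database search the model queries."""
    return ["Policy doc: refunds accepted within 30 days of purchase."]

def agentic_rag(question, max_rounds=3):
    """Let the model write its own queries, looping until it answers."""
    prompt = f"question: {question}"
    for _ in range(max_rounds):
        reply = llm(prompt)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        # The model chose to search; feed the results back and loop.
        query = reply[len("SEARCH:"):].strip()
        prompt += f"\nresults: {internal_search(query)}"
    return "No answer within the round budget."
```

The contrast with classic RAG is that retrieval is driven by model-authored queries over multiple rounds, not by a single embedding of the raw user message.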
pussyjuice 21 minutes ago|||
> The example I recently read was that the contexts are large enough for the entire "The Lord of the Rings" books.

Not really, though. Not in practice at least, e.g. code writing.

Paste a 200 line React component into your favorite LLM, ask it to fix/add/change something and it will do it perfectly.

Paste a 2000 line one though, and it starts omitting, starts making mistakes, assumptions, re-writing what it already has, and so-on.

So what's going on? It's supposed to be able to hold 1000s of lines in context, but in practice it's only like 200.

What happens is that accuracy and agency drop significantly as you need to span larger and larger context windows.

And it's not that it's most accurate when the window is smallest, either; there is a sweet spot.

Outside that sweet spot, you will get "unacceptable responses" - slop you can't use.

That's what happens when you paste the 2000 line React component for example. You get a response you can't quite use. Yet the 200 line one is typically perfect.

What would make the 2000 line one usually perfect every time?

We need a way to increase that "accurate window size", let's call it "working memory", so that we can generate more code, more writing, more pixels at acceptable levels of quality. You'd also have enough language space for agents to operate and collaborate without the amnesia they have today.

RAG is basically the interim workaround for all this. Because you can put everything in a vector DB and search/find what you need in the context when you need it.

So RAG is a great solution for today's problems. Say you have a bunch of Python code files written in a certain style, and the main use case of your LLM is writing Python code in specified ways. With this setup you can probably deliver "better Python code" than your competitor, because RAG gives you a deterministic supplement to your LLM's outputs: it does the research and augments the output in predetermined ways every time it responds to a prompt.

But eventually, if I don't have to upload "The Lord of the Rings" as documents and vector-search to find the relevant areas, if I can just paste the entire text into the input and have it generate the answer considering all of it, not just one little area, that would presumably be a better-quality response.

_the_inflator 3 hours ago|||
I have two surprises for you:

1. Don't believe the pundits of RAG. They never implemented one.

I did, many times, and boy, are they hard, with so many options deciding between utterly crappy results and fantastic accuracy scores with a perfect 100% on facts.

In short: RAG is how you fill the context window. But then what?

2. How does a super-large context window solve your problem? Context windows aren't the problem; accurately matching requirements is. What do you expect your inquiry to solve? The greatest context window ever, but then what? No prompt engineering is coming to save you if you don't know what you want.

RAG is, in very simple terms, a search engine. The context window was never the problem. Never. Filling the context window, i.e. finding the relevant information, is one problem, but it's also only part of the solution.

What if your inquiry needs a combination of multiple sources to make sense? There is never a clean 1:1 matching of information.

"How many cars from 1980 to 1985 and 1990 to 1997 had between 100 and 180PS without Diesel in the color blue that were approved for USA and Germany from Mercedes but only the E unit?"

Have fun, this is a simple request.
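That request is a deterministic filter, not a similarity problem, which is exactly why it belongs in a database the model can query rather than in document chunks. A sketch with sqlite3; the schema and rows are invented for illustration:

```python
import sqlite3

# Hypothetical schema and rows, just to make the query concrete.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cars (
    year INT, ps INT, fuel TEXT, color TEXT,
    markets TEXT, brand TEXT, series TEXT)""")
conn.executemany(
    "INSERT INTO cars VALUES (?, ?, ?, ?, ?, ?, ?)",
    [(1982, 136, "petrol", "blue", "USA,DE", "Mercedes", "E"),
     (1992, 150, "diesel", "blue", "USA,DE", "Mercedes", "E"),
     (1995, 120, "petrol", "blue", "USA,DE", "Mercedes", "E"),
     (1988, 110, "petrol", "blue", "USA,DE", "Mercedes", "E")])

# The question from the comment, translated into one deterministic query:
# year ranges, PS band, no diesel, blue, USA+Germany, Mercedes E unit.
count = conn.execute("""
    SELECT COUNT(*) FROM cars
    WHERE (year BETWEEN 1980 AND 1985 OR year BETWEEN 1990 AND 1997)
      AND ps BETWEEN 100 AND 180
      AND fuel != 'diesel'
      AND color = 'blue'
      AND markets = 'USA,DE'
      AND brand = 'Mercedes' AND series = 'E'
""").fetchone()[0]
```

No embedding model gets you this answer reliably; a vector store can at best retrieve chunks that mention some of these attributes.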

joefourier 44 minutes ago|||
> What if your inquiry needs a combination of multiple sources to make sense? There is no 1:1 matching of information, never.

I don't see the problem if you give the LLM the ability to generate multiple search queries at once. Even simple vector search can give you multiple results at once.

> "How many cars from 1980 to 1985 and 1990 to 1997 had between 100 and 180PS without Diesel in the color blue that were approved for USA and Germany from Mercedes but only the E unit?"

I'm a human and I have a hard time parsing that query. Are you asking only for Mercedes E-Class? The number of cars, as in how many were sold?

mickeyp 2 hours ago|||
It doesn't help that academia loooves ColBERT and will happily tell you how amazing they are at seemingly everything (and, look, for how tiny the models are, 20M params and super fast on a CPU, they are) if only you...

- Chunk properly;

- Elide "obviously useless files" that give mixed signals;

- Re-rank and rechunk the whole files for top scoring matches;

- Throw in a little BM25 but with better stemming;

- Carry around a list of preferred files and ideally also terms to help re-rank;

And so on. Works great when you're an academic benchmaxing your toy Master's project. Try building a scalable vector search that runs on any codebase, without knowing anything at all about it, and getting a decent signal out of it.

Ha.
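For the "throw in a little BM25" step, one common way to merge a lexical ranking with a vector ranking (not necessarily what the parent has in mind) is reciprocal rank fusion, which needs no score calibration between the two retrievers:

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion over several ranked lists of doc IDs.

    Each list (e.g. one from BM25, one from vector search) contributes
    1 / (k + rank) per document; k=60 is the conventional damping value.
    Summing these gives a fused ranking without comparing raw scores.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that appear near the top of both lists float above documents that only one retriever liked, which is usually the behavior you want from a hybrid setup.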

jgalt212 3 hours ago|||
> some have been saying that RAGs are obsolete

I suspect the people saying that have not been transparent with their incentives.

esafak 5 hours ago|||
How can it be obsolete? Maybe if you only have the toy data you picked for your blog post. Companies have gigabytes, petabytes of data to draw from.
mentos 7 hours ago|||
I assume it’s not possible to get the same results by fine-tuning a model on the documents instead?
notglossy 7 hours ago||
You will still get hallucinations. With RAG you use the vectors to aid in finding things that are relevant, and then you typically also have the raw text data stored as well. This allows you to theoretically have LLM outputs grounded in the truth of the documents. Depending on implementation, you can also make the LLM cite the sources (filename, chunk, etc).
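Concretely, the grounding-with-citations pattern keeps the raw text and its provenance next to each vector, so the prompt can demand citations. A minimal sketch; retrieval itself is omitted, and `hits` is whatever the vector search returned, with the raw text stored alongside:

```python
def build_grounded_prompt(question, hits):
    """Build a prompt grounded in retrieved text that forces the model
    to cite (filename, chunk) for each claim.

    `hits` are dicts like {"file": ..., "chunk": ..., "text": ...},
    i.e. the raw data stored next to the vectors at index time.
    """
    sources = "\n".join(
        f"[{h['file']}#chunk{h['chunk']}] {h['text']}" for h in hits)
    return (
        "Answer using ONLY the sources below. Cite as [file#chunk].\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\nAnswer:")
```

Fine-tuning bakes the documents into weights with no such audit trail; this is why RAG can cite and a fine-tuned model cannot.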
tren_hard 2 hours ago||
I’m still learning the advantages of and differences between them; would there be benefits to combining SFT and RAG, or does RAG make SFT redundant?
charcircuit 1 hour ago||
It's nonsense, as all frontier models are integrated with retrieval engines hooked up to various search engines, or their own.
maxperience 2 hours ago||
If you want to build a prod-ready RAG architecture with decent benchmark scores, I can recommend this blog post, based on our experience of which techniques actually work: https://bytevagabond.com/post/how-to-build-enterprise-ai-rag...
z02d 8 hours ago||
Maybe a bit off-topic: For my PhD, I wanted to leverage LLMs and AI to speed up the literature review process*. Due to time constraints, this never really lifted off for me. At the time I checked (about 6 months ago), several tools were already available (NotebookLM, Anara, Connected Papers, ZotAI, Litmaps, Consensus, Research Rabbit) supporting Literature Review. They have all pros and cons (and different scopes), but my biggest requirement would be to do this on my Zotero bibliographic collection (available offline as PDF/ePub).

ZotAI can use LMStudio (for embeddings and LLM models), but at that time, ZotAI was super slow and buggy.

Instead of going through the valley of sorrows (as threatofrain shared in the blog post - thanks for that), is there a more or less out-of-the-box solution (paid or free) for this need (RAG for local literature-review support)?

*If I am honest, it was rather a procrastination exercise, but this is for sure relatable for readers of HN :-D

bee_rider 7 hours ago||
I tried to do RAG on my laptop just by setting it all up myself, but the actual LLM gave poor results (I have a small thin-and-light fwiw, so I could only run weak models). The vector search itself, actually, ended up being a little more useful.
oceansweep 6 hours ago|||
If you don’t mind a little instability while I work out the bugs, you might be interested in my project: https://github.com/rmusser01/tldw_server ; it’s not quite fully ready yet, but the backend API is functional and has a full RAG system with a customizable, tweakable, local-first ETL, so you can use it without relying on any third-party services.
sthimons 4 hours ago|||
Oh! Same! I made an R/Shiny-powered RAG/researching app that hooks into OpenAlex (for papers) and allows you to generate NotebookLM-like outputs. Just got slides with images from the papers injected in, super fun. Takes OpenRouter or local LLMs (if that's your thing). Network graphs too! https://github.com/seanthimons/serapeum/
lukewarm707 4 hours ago||
onyx is good for this; it's the standard doc ingestion -> chunk -> embedding -> index -> query -> rerank -> answer pipeline.

there are a few other local apps with simple knowledge base type things you can use with pdfs. cherry studio is nice, no reranking though.

whakim 4 hours ago||
I'd argue the author missed a trick here by using a fancy embedding model without any re-ranking. One of the benefits of a re-ranker (or even a series of re-rankers!) is that you can embed your documents using a really small and cheap model (this also often means smaller embeddings).
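The cheap-embed-then-rerank pattern looks like this in outline. Both scoring functions are stand-ins: a real system would use a small embedding model for stage one and a cross-encoder for stage two.

```python
def cheap_score(query, doc):
    """Stage 1 stand-in: fast, rough relevance (small embedding model in practice)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def rerank_score(query, doc):
    """Stage 2 stand-in: slower, sharper relevance (a cross-encoder in practice)."""
    return sum(doc.lower().count(w) for w in query.lower().split())

def two_stage_search(query, docs, shortlist=10, top=3):
    """The cheap model narrows the corpus; the re-ranker orders the shortlist.

    Because stage 2 only sees `shortlist` candidates, it can afford to be
    expensive, which is what lets the stage-1 embeddings be small and cheap.
    """
    candidates = sorted(docs, key=lambda d: cheap_score(query, d), reverse=True)
    candidates = candidates[:shortlist]
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)[:top]
```

The economics follow from the split: embedding cost scales with the whole corpus, reranking cost only with the shortlist.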
mettamage 8 hours ago||
51 visitors in real-time.

I love those site features!

In a submission of a few days ago there was something similar.

I love it when a website gives a hint to the old web :)

pussyjuice 1 hour ago|
After a couple of years of proving out multi-modal LLM products, I now consider RAG to be essentially "AI Lite", or just AI-inspired vector search.

It isn't really "AI" in the way ongoing LLM conversations are. The context is effectively controlled by deterministic information, and as LLMs continue to improve through various context-related techniques like re-prompting, running multiple models, etc., that deterministic "re-basing" of the context will stifle the output.

So I say over time it will be treated as less and less "AI" and more "AI adjacent".

The significance is that right now RAG is largely considered an "AI pipeline strategy" in its own right, compared to others that involve pure context engineering.

But when the context size of LLMs grows much larger (with integrity), when a model can, say, accurately hold thousands and thousands of lines of code in context without having to use RAG to search and find, it will do a lot more for us. We will get the agentic automation they are promising but not delivering (due to this current limitation).
