Posted by pmaze 1/10/2026

Show HN: I used Claude Code to discover connections between 100 books (trails.pieterma.es)
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes (rough sketch of the partitioning step below).

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics co-occurring within a chunk window (example query sketched below).

* Everything is stored in SQLite and manipulated using a set of CLI tools.
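
For the curious, the recursive partitioning step is conceptually something like this. A minimal sketch using python-igraph and leidenalg; the function, thresholds, and the "name" vertex attribute are illustrative rather than my exact code:

    import igraph as ig
    import leidenalg as la

    def build_topic_tree(graph: ig.Graph, depth=0, max_depth=3, min_size=8):
        # Partition the topic co-occurrence graph, then recurse into any
        # community that is still big enough to be worth splitting further.
        part = la.find_partition(graph, la.ModularityVertexPartition)
        children = []
        for members in part:
            if depth + 1 < max_depth and len(members) > min_size:
                children.append(build_topic_tree(graph.subgraph(members),
                                                 depth + 1, max_depth, min_size))
            else:
                children.append([graph.vs[m]["name"] for m in members])
        return children

Each node of the resulting tree then gets an LLM-generated label.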
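
And the "topics co-occurring within a chunk window" lookup boils down to a single join. The schema and names below are simplified and made up for illustration; the real CLI tools wrap queries of roughly this shape:

    import sqlite3

    # Illustrative schema: chunks(id, book_id, position), chunk_topics(chunk_id, topic_id).
    CO_OCCUR = """
    SELECT b.topic_id, COUNT(*) AS n
    FROM chunk_topics a
    JOIN chunks ca ON ca.id = a.chunk_id
    JOIN chunks cb ON cb.book_id = ca.book_id
                  AND ABS(cb.position - ca.position) <= ?   -- chunk window
    JOIN chunk_topics b ON b.chunk_id = cb.id
    WHERE a.topic_id = ? AND b.topic_id != a.topic_id
    GROUP BY b.topic_id
    ORDER BY n DESC
    LIMIT 20;
    """

    def cooccurring_topics(db_path, topic_id, window=3):
        with sqlite3.connect(db_path) as conn:
            return conn.execute(CO_OCCUR, (window, topic_id)).fetchall()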

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.

524 points | 146 comments
amadeuswoo 1/10/2026|
The feedback loop you describe—watching Claude's logs, then just asking it what functionality it wished it had—feels like an underexplored pattern. Did you find its suggestions converged toward a stable toolset, or did it keep wanting new capabilities as the trails got more sophisticated?
samuelknight 1/10/2026||
I do this all the time in my Claude Code workflow:
- Claude will stumble a few times before figuring out how to do part of a complex task.
- I will ask it to explain what it was trying to do, how it eventually solved it, and what was missing from its environment.
- Trivial pointers go into the CLAUDE.md. Complex tasks go into a new project skill or a helper script.

This is the best way to reinforce a copilot because models are pretty smart most of the time and I can correct the cases where it stumbles with minimal cognitive effort. Over time I find more and more tasks are solved by agent intelligence or these happy-path hints. As primitive as it is, CLAUDE.md is the best we have for long-term adaptive memory.

pmaze 1/10/2026||
I ended up judging where to draw the line. Its initial suggestions were genuinely useful and focused on making the basic tool use more efficient, e.g. complaining about a missing CLI parameter that I'd neglected to add for a specific command, asking me to let it navigate the topic tree in ways I hadn't considered, or proposing new definitions for related topics. After a couple of iterations the low-hanging fruit was exhausted, and its suggestions started spiralling out beyond what I thought would pay off (like training custom embeddings). As long as I kept asking it for new ideas, it would come up with something, but with rapidly diminishing returns.
lkbm 1/10/2026||
Earlier today, I was thinking about doing something somewhat similar to this.

I was recently trying to remember a portal fantasy I read as a kid. Goodreads has some impressive lists, not just "Portal Fantasies"[0], but "Portal Fantasies where the portal is on water"[1], and seven more "where/what's the portal" categories like that.

But the portal fantasy I was seeking is on the water and not on the list.

LLMs have failed me so far, as has browsing the larger portal fantasy list. So, I thought, what if I had an LLM look through a list of kids books published in the 1990s and categorize "is this a portal fantasy?" and "which category is the portal?"

I would 1. possibly find my book and 2. possibly find dozens of books I could add to the lists. (And potentially help augment other Goodreads-like sites.)

Haven't done it, but I still might.

Anyway, thanks for making this. It's a really cool project!

[0] https://www.goodreads.com/list/show/103552.Portal_Fantasy_Bo...

[1] https://www.goodreads.com/list/show/172393.Fiction_Portal_is...

znnajdla 1/11/2026||
This is really, really good. Ignore the commenters in this thread who don’t see the connections. It takes a very high degree of artistic creativity and linguistic imagination to see these types of connections, and many of the “engineer types” on this forum are unfamiliar with that mode of thinking. Ignore them. Every one of these connected threads is really good.
catlifeonmars 1/11/2026|
It’s the opposite. The connections are so trivial/obvious as to be uninteresting.
nkrisc 1/11/2026||
I’m not surprised that it found connections when you told it to find connections. Most of those connections seem rather dubious to me. I think you’d have been better off coming up with these yourself.
drakeballew 1/11/2026||
This is a beautiful piece of work. The actual data or outputs seem to be more or less...trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously referential LLM dog. The world has more going on than whatever these models seem to believe.

Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any sort of satisfactory end that a smoked-out half-asleep college freshman could. The models don't have sufficient capacity...yet.

liqilin1567 1/11/2026||
When I saw that the trail goes through just one word, like "Us/Them" or "fictions", I thought it might be more useful if the trail went through concepts.
tmountain 1/11/2026||
The links drawn between the books are “weaker than weak” (to quote Little Richard). This is akin to just thumbing through a book and saying, “oh, look, they used the word fracture and this other book used the word crumble, let’s assign a theme.” It’s a cool idea, but it fails in the execution.
usefulposter 1/11/2026||
Yes. It's flavor-of-the-month Anthropic marketing drivel: tenuous word associations edition¹.

¹ Oh, that's just LLMs in general? Cool!

georgebcrawford 1/11/2026||
I spent 30 seconds and the first word that came to mind was drivel.

As an English teacher this shit makes me hate LLMs even more. Like so much techbro nonsense, it completely ignores what makes us human.

rtgfhyuj 1/11/2026|||
give it a more thorough look maybe?

https://trails.pieterma.es/trail/collective-brain/ is great

eloisius 1/11/2026|||
It’s an interesting thread for sure, but while reading through it I couldn’t help but think that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.
DrewADesign 1/11/2026|||
And that’s the problem with a lot of chatbot usage in the wild: it’s saving you from having to think about things where thinking about them is the point. E.g. hobby writing, homework, and personal correspondence. That’s obviously not the only usage, but it’s certainly the basis for some of the more common use cases, and I find that depressing as hell.
rtgfhyuj 1/11/2026|||
so consider them deeply. Why does the value diminish if discovered by a machine as long as the value is in the thinking?
znnajdla 1/11/2026||||
This is a software engineering forum. Most of the engineer types here lack the critical education needed to appreciate this sort of thing. I have a literary education and I’m actually shocked at how good most of these threads are.
PinkMilkshake 1/11/2026|||
I think most engineer types avoid that kind of analysis on purpose.
znnajdla 1/11/2026||
Programmers tend to lean two ways: math-oriented or literature-oriented. The math types tend to become FAANG engineers. The literature oriented ones tend to start startups and become product managers and indie game devs and Laravel artisans.
only-one1701 1/11/2026|||
That doesn’t speak well towards your literary education, candidly.
znnajdla 1/11/2026||
We should try posting this on a literary discussion forum and see the responses there. I expect a lot of AI FUD and envy, but that’ll be evidence in this tool’s favor.
only-one1701 1/11/2026||
lol yes that’s the only reason anyone could find this uh literary analysis less than compelling
pcrh 1/11/2026|||
I had a look at that. The notion of a "collective brain" is similar to that of "civilization". It is not a novel notion, and the connections shown there are trivial and uninspiring.
what-the-grump 1/11/2026|||
Build a RAG with a significant amount of text, extract it by keyword, topic, place, date, name, etc.

… realize that it’s nonsense and the LLM is not smart enough to figure out much without a reranker and a ton of technology that tells it what to do with the data.

You can run any vector query against a RAG and you are guaranteed a response, even with chunks that are unrelated in any way.
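
To make the "guaranteed a response" point concrete: a plain nearest-neighbour lookup hands back its top k no matter how weak the best match is, unless you add a similarity floor or a reranker. Rough sketch (numpy only, names made up):

    import numpy as np

    def top_k(query_vec, chunk_vecs, k=5):
        # Cosine similarity against every chunk; argsort always hands back
        # k "hits", even when the best score is barely above noise.
        q = query_vec / np.linalg.norm(query_vec)
        c = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
        sims = c @ q
        idx = np.argsort(-sims)[:k]
        return idx, sims[idx]   # no threshold: unrelated chunks still "match"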

electroglyph 1/11/2026||
unrelated in any way? that's not normal. have you tested the model to make sure you have sane output? unless you're using sentence-transformers (which is pretty foolproof) you have to be careful about how you pool the raw output vectors
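
e.g. the usual mean pooling over non-padding tokens looks like this (a rough sketch; the model name is just an example):

    import torch
    from transformers import AutoTokenizer, AutoModel

    name = "sentence-transformers/all-MiniLM-L6-v2"  # example model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    def embed(texts):
        enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state          # (batch, seq, dim)
        mask = enc["attention_mask"].unsqueeze(-1).float()   # zero out padding
        pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return torch.nn.functional.normalize(pooled, dim=1)

skip the attention mask (or grab a raw CLS token from a model that wasn't trained for it) and the vectors can look superficially fine while ranking garbage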
baxtr 1/11/2026||
I checked 2-3 trails and have to agree.

Take for example the OODA loop. How are the connections made here of any use? Seems like the words are semantically related but the concepts are not. And even if they are, so what?

I am missing the so what.

Now imagine a human had read all these books. They would have come up with something new, I’m pretty sure about that.

https://trails.pieterma.es/trail/tempo-gradient/

wry_durian 1/11/2026||
Indeed, I'm not seeing a "so what" here. LLMs make mental models cheap, but all models are wrong, and this one is too. The inclusion of Donella Meadows' book and the quote from The Guns of August are particularly tenuous.
timoth3y 1/10/2026||
What meaningful connections did it uncover?

You have an interesting idea here, but looking over the LLM output, it's not clear what these "connections" actually mean, or if they mean anything at all.

Feeding a dataset into an LLM and getting it to output something is rather trivial. How is this particular output insightful or helpful? What specific connections gave you, the author, new insight into these works?

You correctly, and importantly, point out that "LLMs are overused to summarise and underused to help us read deeper", but you published the LLM summary without explaining how the LLM helped you read deeper.

pmaze 1/10/2026||
The connections are meaningful to me in so far as they get me thinking about the topics, another lens to look at these books through. It's a fine balance between being trivial and being so out there that it seems arbitrary.

A trail that hits that balance well IMO is https://trails.pieterma.es/trail/pacemaker-principle/. I find the system theory topics the most interesting. In this one, I like how it pulled in a section from Kitchen Confidential in between oil trade bottlenecks and software team constraints to illustrate the general principle.

timoth3y 1/10/2026||
Can you walk me through some of the insights you gained? I've read several of those books, including Kitchen Confidential and Confessions of an Economic Hit Man, and I don't see the connection that the LLM (or you) is trying to draw. What is the deeper insight into these works that I am missing?

I'm not familiar with the term "Pacemaker Principle", and a Google search was unhelpful. What does it mean in this context? What else does this general principle apply to?

I'm perfectly willing to believe that I am missing something here. But reading through many of the supportive comments, it seems more likely that this is an LLM Rorschach test where we are given random connections and asked to do the mental work of inventing meaning in them.

I love reading. These are great books. I would be excited if this tool actually helps point out connections that have been overlooked. However, it does not seem to do so.

varenc 1/11/2026|||
> Can you walk me though some of the insights you gained?

This made me realize that so many influential figures have either absent fathers, or fathers that berated them or didn't give them their full trust/love. I think there's something to the idea that this commonality is more than coincidence. (that's the only topic on the site I've read through so far, and I ignored the highlighted word connections)

gchamonlive 1/10/2026|||
> we are given random connections and asked to do the mental work of inventing meaning in them

How is that different from having an insight yourself and later doing the work to see if it holds on closer inspection?

delusional 1/11/2026||
Don't ask me to elaborate on this, because it's kinda nebulous in my mind. I think there's a difference between arriving at an insight and interrogating it on your own initiative, and being given the same insight.
gchamonlive 1/11/2026||
I don't doubt there is a difference in the mechanism of arriving at a given connection. What I think is not possible to distinguish is the connection that someone made intuitively after reading many sources from the one that the AI makes, because both will have to undergo scrutiny before being accepted as relevant. We can argue there could be a difference in quality, depth and search space, maybe, but I don't think there is an ontological difference.
fwip 1/11/2026||
The one that you thought of in the shower has a much greater chance of being right, and also of being relevant to you.
gchamonlive 1/11/2026||
Has it? Why?
fwip 1/11/2026||
Because humans aren't morons tasked with coming up with 100 connections.
gchamonlive 1/11/2026||
Doesn't explain why a connection made in the shower has in essence more merit than a connection an LLM was instructed to come up with.
fwip 1/12/2026||
Not sure how to make it clearer. Look at the quality of this post, and compare it to your shower thoughts. I imagine you're not as stupid as the machine was.
Aurornis 1/11/2026|||
I like the design that highlights words in one summary and links them to highlights in the next. It's a cool idea.

But so many of the links just don't make sense, as several comments have pointed out. Are these actually supposed to represent connections between books, or is it just a random visual effect that's supposed to imply they're connected?

I clicked on one category and it has "Us/Them" linked to "fictions" in the next summary. I get that it's supposed to imply some relationship, but I can't parse the relationships.

rjh29 1/10/2026||
100 books is too small a dataset - particularly given it's a set of HN recommendations (i.e. a very narrow and specific subset of books). A larger set would probably yield more surprising and interesting groupings.
DyslexicAtheist 1/10/2026||
> 100 books is too small a dataset

This to me sounds off. I read the same 8 to 10 books over and over, and with every read I discover new things. The idea that more books are more useful runs against rereading the same books on repeat. And while I'm not religious, how about the dudes who read only one book (the Bible, or the Koran) and have been claiming for a thousand years that they get all their wisdom from it?

If I have a library of 100+ books and they are not enough, then isn't the quality of those books the problem, rather than the number of books in the library?

lisdexan 1/11/2026||
Finally, Schizophrenia as a Service (SaaS).
hecanjog 1/11/2026||
You really know what a good interface should be like; this is really inspiring. So is the design of everything I've seen on your website!

I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part is not the really interesting or useful thing about your project), but I think a walk-through of how you approach UI design would be very interesting!

jennyholzer6 1/11/2026|
[dead]