Posted by pmaze 1/10/2026

Show HN: I used Claude Code to discover connections between 100 books (trails.pieterma.es)
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.

* Everything is stored in SQLite and manipulated using a set of CLI tools.
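As a hedged sketch of the cooccurrence browsing described above, using stdlib sqlite3 (the table names, columns, and sample data here are illustrative guesses, not the project's actual schema):

```python
import sqlite3

# Hypothetical schema: chunks are ordered per book, each tagged with topics.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE chunks (id INTEGER PRIMARY KEY, book TEXT, seq INTEGER);
CREATE TABLE chunk_topics (chunk_id INTEGER, topic TEXT);
""")
con.executemany("INSERT INTO chunks VALUES (?, ?, ?)", [
    (1, "sapiens", 1), (2, "sapiens", 2), (3, "sapiens", 3),
    (4, "zero-to-one", 1), (5, "zero-to-one", 2),
])
con.executemany("INSERT INTO chunk_topics VALUES (?, ?)", [
    (1, "myth"), (2, "money"), (3, "money"),
    (4, "secrecy"), (5, "startup cults"),
])

def cooccurring(topic, window=2):
    """Topics whose chunks fall within `window` positions of the given
    topic's chunks, in the same book."""
    return con.execute("""
        SELECT DISTINCT t2.topic
        FROM chunk_topics t1
        JOIN chunks c1 ON c1.id = t1.chunk_id
        JOIN chunks c2 ON c2.book = c1.book
                      AND ABS(c2.seq - c1.seq) <= ?
        JOIN chunk_topics t2 ON t2.chunk_id = c2.id
        WHERE t1.topic = ? AND t2.topic != ?
        ORDER BY t2.topic
    """, (window, topic, topic)).fetchall()

print(cooccurring("myth"))  # [('money',)] - "money" chunks sit within 2 of "myth"
```

A CLI tool wrapping queries like this is enough for Claude Code to drive interactively; the window size trades precision for serendipity.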

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.

524 points | 146 comments
zkmon 1/11/2026|
Given the common goals of every book (fame and sales by grabbing user attention), the general themes and styles would have high similarity. It's like flowers with bright colors and nice shapes.

Orwellian motives (sheer egoism, aesthetic enthusiasm, historical impulse and political purpose) are somewhat dated.

andai 1/11/2026||
I tried using Claude Web to help me understand a textbook recently.

The book was really big and it got stuck in "indexing". (Possibly broke the indexer?) But thanks to the CLI integration, it was able to just iteratively grep all the info it needed out of it. I found this very amusing.

Anthropic's article on retrieval emphasizes the importance of keyword search, since it often outperforms embeddings depending on the query. Their own approach is a hybrid:

https://www.anthropic.com/engineering/contextual-retrieval
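A minimal sketch of the hybrid idea (not Anthropic's actual implementation): rank documents by keyword overlap and by vector similarity separately, then merge the rankings with reciprocal rank fusion. The embeddings here are hand-made stub vectors; a real system would use a model.

```python
import math

docs = {
    "a": "grep searches plain text with regular expressions",
    "b": "embeddings map text to dense vectors",
    "c": "hybrid retrieval combines keyword and vector search",
}
# Stub 2-d "embeddings" for illustration only.
vecs = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [0.7, 0.7]}

def keyword_rank(query):
    # Naive term-overlap score standing in for BM25.
    q = set(query.lower().split())
    scores = {d: len(q & set(t.lower().split())) for d, t in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

def vector_rank(qvec):
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))
    return sorted(vecs, key=lambda d: cos(qvec, vecs[d]), reverse=True)

def rrf(*rankings, k=60):
    """Reciprocal rank fusion: score each doc as sum of 1/(k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

merged = rrf(keyword_rank("keyword and vector search"), vector_rank([0.6, 0.8]))
print(merged)  # doc "c" wins on both signals, so it tops the fused ranking
```

RRF needs no score normalisation across the two retrievers, which is why it is a common way to fuse keyword and embedding results.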

hising 1/10/2026||
Yeah, I had a similar idea: I used the OpenAI API to break down movies into the 3-act structure, narrative, pacing, character arcs etc., and then tried to find similar movies using PostgreSQL with pgvector. The idea was to have another way to find movies I'd like to watch next, based on more than IMDb's "similar movies". I threw some hours at it, but I guess it's a system that needs a lot of data, a lot of tokens and an enormous amount of tweaking to be useful. I love your idea! I agree with you that we could use LLMs for this kind of stuff that we as humans are quite bad at.
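The nearest-neighbour step can be sketched in pure Python (the movie titles and feature vectors below are invented for illustration; in pgvector the equivalent is `SELECT title FROM movies ORDER BY embedding <=> :query LIMIT n;`, where `<=>` is cosine distance):

```python
import math

# Hypothetical per-movie feature vectors, e.g. from embedding an LLM's
# breakdown of structure, pacing, and character arcs.
movies = {
    "Heat": [0.9, 0.1, 0.3],
    "Collateral": [0.8, 0.2, 0.4],
    "Paddington": [0.1, 0.9, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similar_to(title, n=2):
    """Rank the other movies by cosine similarity to the given one."""
    qv = movies[title]
    others = [(t, cosine(qv, v)) for t, v in movies.items() if t != title]
    return [t for t, _ in sorted(others, key=lambda p: p[1], reverse=True)[:n]]

print(similar_to("Heat"))  # Collateral ranks ahead of Paddington
```

The hard part, as the comment says, is not the query but producing feature vectors good enough that the neighbours are actually meaningful.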
frankdenbow 1/12/2026||
Love this, it is interesting to see the links between topics, with things like the father-son relationship. I have a long queue of books to read, and this year I finally set aside planned time on the calendar to read and walk. There are specific topics I want to read about, so even if it just helped me find some excerpts from books around a topic, it would help me decide. I think you're onto something here.
djeastm 1/11/2026||
While I don't think the section titles or one-sentence summaries accurately reflect the rather tenuous textual connections, I did find it strangely intriguing to scroll through the paragraphs of the books and just catch different bits of ideas from these writers.

It's like grabbing a half-dozen books off the library shelf, opening to a random page in each, then flitting through them, kind of like an "engineering nerd book sample platter".

Aurornis 1/10/2026||
It’s interesting how many of the descriptions have a distinct LLM-style voice. Even if you hadn’t posted how it was generated I would have immediately recognized many of the motifs and patterns as LLM writing style.

The visual style of linking phrases from one section to the next looks neat, but the connections don’t seem correct. There’s a link from “fictions” to “internal motives” near the top of the first link and several other links are not really obviously correct.

pmaze 1/10/2026||
The names & descriptions definitely have that distinct LLM flavour to them, regardless of which model I used. I decided to keep them, but as short as possible. In general, I find the recombination of human-written text to be the main interest.

There are two stages to the linking: first juxtaposing the excerpts, then finding and linking key phrases within them. I find the excerpts themselves often have interesting connections between them, but the key phrases can be a bit out there. The "fictions" to "internal motives" one does gel for me, given the theme of deceiving ourselves about our own motivations.

reedf1 1/10/2026||
Well, even the post itself reads to me as AI-generated.
akshay326 1/12/2026||
This is pretty cool. I wanted to do something similar with my Readwise Reader too. Jinx: today Claude Code created a 3D Neo4j visualizer tool for YC advice + my quotes collection. Code here - https://github.com/akshay326/quote-viz
rhgraysonii 1/11/2026||
You might enjoy my tool deciduous. It's for building knowledge trees and referencing stuff exactly like this. The website says a bit more: http://notactuallytreyanastasio.github.io/deciduous/
fudged71 1/11/2026|
Interesting. Was this inspired by the "Context Graphs" concept discussed on X?
rhgraysonii 1/14/2026||
No, I don’t hang out at the nazi bar.
itsangaris 1/10/2026||
surprised that "seeing like a state" didn't get included in the "legibility tax" category
guidoism 1/11/2026|
Nice! I've been using Claude Code and ChatGPT for something similar. My inspiration is Adler's concept of The Great Conversation and his Propædia. I've been able to jump between books to read about the same concept from different authors' perspectives.
Balgair 1/11/2026|
This is his Syntopicon for modern works, and automated. It's amazing, I've been wanting to do this for a while but haven't had the time.

I really think we all should sync up and talk more. I want to make this bigger.
