Posted by pmaze 1/10/2026

Show HN: I used Claude Code to discover connections between 100 books (trails.pieterma.es)
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a staged pipeline, chaining together LLM calls to build up context about the library. I was mainly getting back the insights I had baked into the prompts, and the results weren't particularly surprising.

On a whim, I gave Claude Code access to my debug CLI tools and found that it wiped the floor with that approach. It gave genuinely interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite (first sketch after this list). The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels (second sketch below). This gives a high-level sense of the themes.

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics co-occurring within a chunk window (third sketch below).

* Everything is stored in SQLite and manipulated using a set of CLI tools.
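
In spirit, the per-chunk topic pass is only a few lines. A minimal sketch with the google-genai SDK - the model id, prompt, and chunk_topics table here are illustrative, not the exact ones used:

    # Tag one chunk with topic labels and store them in SQLite.
    import sqlite3
    from google import genai  # pip install google-genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    def index_chunk(db: sqlite3.Connection, chunk_id: int, text: str) -> None:
        prompt = ("List 3-8 short topic labels for this passage, "
                  "one per line:\n\n" + text)
        resp = client.models.generate_content(
            model="gemini-2.0-flash-lite", contents=prompt)
        for line in resp.text.splitlines():
            topic = line.strip("-* ").lower()
            if topic:
                db.execute(
                    "INSERT INTO chunk_topics (chunk_id, topic) VALUES (?, ?)",
                    (chunk_id, topic))
        db.commit()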
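
The topic tree comes from partitioning a topic graph and recursing into each community. A sketch with igraph and leidenalg, assuming a co-occurrence graph whose vertices carry a "name" attribute (the depth cut-off is illustrative, and the per-node LLM labelling is omitted):

    import igraph as ig
    import leidenalg

    def build_tree(graph: ig.Graph, depth: int = 0, max_depth: int = 3) -> dict:
        # Split this level's topics into communities.
        part = leidenalg.find_partition(
            graph, leidenalg.ModularityVertexPartition)
        node = {"topics": list(graph.vs["name"]), "children": []}
        if depth >= max_depth or len(part) <= 1:
            return node
        # Recurse into the subgraph induced by each community.
        for community in part:
            sub = graph.subgraph(community)
            node["children"].append(build_tree(sub, depth + 1, max_depth))
        return node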
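
And the window co-occurrence browse is a single SQLite query over the same hypothetical chunk_topics table, assuming chunk ids are sequential within a book:

    def cooccurring_topics(db, seed_topic, window=2, limit=20):
        # Topics appearing within +/- `window` chunks of the seed topic.
        # (Ignores book boundaries for brevity.)
        return db.execute(
            """
            SELECT b.topic, COUNT(*) AS n
            FROM chunk_topics a
            JOIN chunk_topics b
              ON b.chunk_id BETWEEN a.chunk_id - ? AND a.chunk_id + ?
             AND b.topic != a.topic
            WHERE a.topic = ?
            GROUP BY b.topic
            ORDER BY n DESC
            LIMIT ?
            """,
            (window, window, seed_topic, limit),
        ).fetchall()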

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/

I’m curious whether this way of reading resonates with anyone else - LLM-mediated or not.

524 points | 146 comments
sgt101 1/11/2026
Interesting - ages ago I used SML to look at the relationships between Shakespeare's plays.

https://medium.com/gft-engineering/using-text-embeddings-and...

Validation is a problem here - you find relationships, but so what? Are they right? I can't say. It is interesting, though.

iamzamek 1/11/2026
This is so cool!
pennaMan 1/11/2026
this is amazingly cool, great work!
localhost 1/11/2026
First, thank you for coming up with this idea. Love it. In the spirit of "tools for thought", I think this scratches an important itch.

Someone called the output "egregious, obviously referential LLM dog". And someone else called this trail an example of that: https://trails.pieterma.es/trail/tempo-gradient/

What struck me when reading it was that I parachuted into that trail with zero context. So I asked an LLM to generate some surrounding context for me: this is the verbatim prompt that I just gave gpt-5.2/medium running in my own agentic harness:

> read this: https://trails.pieterma.es/trail/tempo-gradient/. then i want you to write a short essay that shows the relationship between the different quotes making sure to use each quote in the process. the narrative should help me understand the links between the different quotes in each book in context. feel free to use your own knowledge of the books and your general knowledge to make this an approachable thing. what another llm did here is cool - it found relationships between this set of quotes. now i want you to materialize them into a compelling essay that invites curiosity and teaches me something. write the essay to tempo-gradient.md and respond in chat with how you thought about the problem and whether you thought these relationships were meaningful.

This was my agent's response:

Wrote the essay to tempo-gradient.md.

How I thought about it: I treated “tempo gradient” as the shared mechanism behind all eight excerpts—mismatch in loop speed (notice → interpret → decide → act) caused by attention, infrastructure latency, organizational rigidity, system delays, stealthy manipulation, or runaway acceleration. Then I ordered the quotes from “inner time” (tennis/attention) → engineered milliseconds (HFT) → explicit doctrine (OODA) → industrial schedules (mobilization) → structural delays (Beer Game/economy) → slow covert tempo (cyber) → extreme fast tempo (AI takeoff), so each quote sets up the next.

Whether the relationships felt meaningful: yes, mostly. They’re not “the same idea” so much as the same shape appearing in different contexts—feedback delay and decision cadence determining who gets to set the terms of reality for everyone else. The only relationship that feels a bit more interpretive than direct is tennis ↔ geopolitics, but even there the bridge (“how many nows” as a controllable input to decision tempo) is surprisingly sturdy.

And this is the output: https://gist.github.com/jflam/49753b7da64a74f07e35f6e2465f1a...

wormpilled 1/10/2026
>A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems

Interesting... seems like it wants the keys on your system! ;)

mannanj 1/10/2026
Seems like a lot of successful leaders have a history of, or normalize, deception and lying for some benefit.
only-one1701 1/10/2026
This is an IQ test lol
jereees 1/10/2026
now do this for research papers! fun stuff :)