Posted by pmaze 1 day ago
I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.
I started out with a staged pipeline, chaining together LLM calls to build up a context of the library. I was mainly getting back the insights I was baking into the prompts, and the results weren't particularly surprising.
On a whim, I gave Claude Code access to my debug CLI tools and found that it wiped the floor with that approach. It gave genuinely interesting results and required very little orchestration in comparison.
One of my favourite trails of excerpts runs from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass-movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency: Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems, as if the task itself summoned a Foucault’s Pendulum mindset.
Details:
* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).
* Chunks are indexed by topic using Gemini Flash Lite; indexing the whole library cost about £10.
* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels (see the first sketch after this list). This gives a high-level sense of the themes.
* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics co-occurring within a chunk window (see the second sketch after this list).
* Everything is stored in SQLite and manipulated using a set of CLI tools.
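To make the list concrete, here are two minimal sketches. Neither is the project's actual code: the graph construction, schema, and parameters are assumptions for illustration.

First, the recursive Leiden step. This sketch assumes an igraph topic graph with a `name` vertex attribute and a `weight` edge attribute (e.g. co-occurrence counts); the real pipeline's graph construction and stopping rule may differ.

```python
# Sketch of recursive Leiden partitioning into a topic tree.
import igraph as ig
import leidenalg as la

def build_topic_tree(graph: ig.Graph, min_size: int = 8) -> dict:
    """Recursively split a topic graph into nested clusters."""
    if graph.vcount() <= min_size:
        # Leaf: small enough to hand to an LLM for a label.
        return {"topics": graph.vs["name"]}
    partition = la.find_partition(
        graph,
        la.RBConfigurationVertexPartition,  # modularity with a resolution knob
        weights="weight",
    )
    if len(partition) <= 1:  # Leiden found no further structure
        return {"topics": graph.vs["name"]}
    return {
        "children": [
            build_topic_tree(graph.subgraph(cluster), min_size)
            for cluster in partition
        ]
    }  # each node then gets an LLM-generated label
```

Second, the chunk-window co-occurrence browse, which boils down to a windowed self-join in SQLite. The `chunk_topics(book_id, chunk_index, topic_id)` table is hypothetical, as is the window size:

```python
# Sketch of the "topics co-occurring within a chunk window" browse.
import sqlite3

def cooccurring_topics(db_path: str, topic_id: int,
                       window: int = 3, limit: int = 20):
    """Topics appearing within `window` chunks of a topic, ranked by count."""
    query = """
        SELECT b.topic_id, COUNT(*) AS n
        FROM chunk_topics a
        JOIN chunk_topics b
          ON a.book_id = b.book_id
         AND b.topic_id != a.topic_id
         AND ABS(a.chunk_index - b.chunk_index) <= ?
        WHERE a.topic_id = ?
        GROUP BY b.topic_id
        ORDER BY n DESC
        LIMIT ?
    """
    with sqlite3.connect(db_path) as conn:
        return conn.execute(query, (window, topic_id, limit)).fetchall()
```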
I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/
I’m curious whether this way of reading resonates with anyone else, LLM-mediated or not.
https://medium.com/gft-engineering/using-text-embeddings-and...
Validation is a problem here: you find relationships, but so what? Are they right? I can't say. It is interesting, though.
I really appreciate you mentioning this. I think this is the nature of LLMs in general: any symbol they process can affect their reasoning.
Someone called the output "egregious, obviously referential LLM dog". And someone else called this trail an example of that: https://trails.pieterma.es/trail/tempo-gradient/
What struck me when reading it was that I parachuted into that trail with zero context. So I asked an LLM to generate some surrounding context for me. This is the verbatim prompt I just gave gpt-5.2/medium, running in my own agentic harness:
> read this: https://trails.pieterma.es/trail/tempo-gradient/. then i want you to write a short essay that shows the relationship between the different quotes making sure to use each quote in the process. the narrative should help me understand the links between the different quotes in each book in context. feel free to use your own knowledge of the books and your general knowledge to make this an approachable thing. what another llm did here is cool - it found relationships between this set of quotes. now i want you to materialize them into a compelling essay that invites curiosity and teaches me something. write the essay to tempo-gradient.md and respond in chat with how you thought about the problem and whether you thought these relationships were meaningful.
This was my agent's response:
Wrote the essay to tempo-gradient.md.
How I thought about it: I treated “tempo gradient” as the shared mechanism behind all eight excerpts—mismatch in loop speed (notice → interpret → decide → act) caused by attention, infrastructure latency, organizational rigidity, system delays, stealthy manipulation, or runaway acceleration. Then I ordered the quotes from “inner time” (tennis/attention) → engineered milliseconds (HFT) → explicit doctrine (OODA) → industrial schedules (mobilization) → structural delays (Beer Game/economy) → slow covert tempo (cyber) → extreme fast tempo (AI takeoff), so each quote sets up the next.
Whether the relationships felt meaningful: yes, mostly. They’re not “the same idea” so much as the same shape appearing in different contexts—feedback delay and decision cadence determining who gets to set the terms of reality for everyone else. The only relationship that feels a bit more interpretive than direct is tennis ↔ geopolitics, but even there the bridge (“how many nows” as a controllable input to decision tempo) is surprisingly sturdy.
And this is the output: https://gist.github.com/jflam/49753b7da64a74f07e35f6e2465f1a...
Conclusion: you find wisdom in everything if you look for it.
>The Law of Fives states simply that: ALL THINGS HAPPEN IN FIVES, OR ARE DIVISIBLE BY OR ARE MULTIPLES OF FIVE, OR ARE SOMEHOW DIRECTLY OR INDIRECTLY APPROPRIATE TO 5.
>The Law of Fives is never wrong.
>In the Erisian Archives is an old memo from Omar to Mal-2: "I find the Law of Fives to be more and more manifest the harder I look."