
Posted by pmaze 1 day ago

Show HN: I used Claude Code to discover connections between 100 books (trails.pieterma.es)
I think LLMs are overused to summarise and underused to help us read deeper.

I built a system for Claude Code to browse 100 non-fiction books and find interesting connections between them.

I started out with a pipeline in stages, chaining together LLM calls to build up a context of the library. I was mainly getting back the insight that I was baking into the prompts, and the results weren't particularly surprising.

On a whim, I gave CC access to my debug CLI tools and found that it wiped the floor with that approach. It gave actually interesting results and required very little orchestration in comparison.

One of my favourite trails of excerpts goes from Jobs’ reality distortion field to Theranos’ fake demos, to Thiel on startup cults, to Hoffer on mass movement charlatans (https://trails.pieterma.es/trail/useful-lies/). A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

Details:

* The books are picked from HN’s favourites (which I collected before: https://hnbooks.pieterma.es/).

* Chunks are indexed by topic using Gemini Flash Lite. The whole library cost about £10.

* Topics are organised into a tree structure using recursive Leiden partitioning and LLM labels. This gives a high-level sense of the themes.

* There are several ways to browse. The most useful are embedding similarity, topic tree siblings, and topics cooccurring within a chunk window.

* Everything is stored in SQLite and manipulated using a set of CLI tools.
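To make the last two bullets concrete, here is a minimal sketch of what "topics cooccurring within a chunk window" could look like on top of a SQLite store. The table names, columns, and window logic are my guesses for illustration, not the actual schema:

```python
import sqlite3

# Hypothetical schema: chunks of book text, each tagged with topics.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE chunks (id INTEGER PRIMARY KEY, book TEXT, position INTEGER);
CREATE TABLE chunk_topics (chunk_id INTEGER REFERENCES chunks(id), topic TEXT);
""")

conn.executemany("INSERT INTO chunks VALUES (?, ?, ?)",
                 [(1, "book_a", 0), (2, "book_a", 1), (3, "book_b", 0)])
conn.executemany("INSERT INTO chunk_topics VALUES (?, ?)",
                 [(1, "secrecy"), (1, "startups"), (2, "fraud"),
                  (2, "secrecy"), (3, "fraud")])

def cooccurring(topic, window=1):
    """Topics tagged within `window` chunk positions of the given topic, same book."""
    return [row[0] for row in conn.execute("""
        SELECT DISTINCT t2.topic
        FROM chunk_topics t1
        JOIN chunks c1 ON c1.id = t1.chunk_id
        JOIN chunks c2 ON c2.book = c1.book
                      AND ABS(c2.position - c1.position) <= ?
        JOIN chunk_topics t2 ON t2.chunk_id = c2.id
        WHERE t1.topic = ? AND t2.topic != ?
        ORDER BY t2.topic
    """, (window, topic, topic))]

print(cooccurring("secrecy"))  # -> ['fraud', 'startups']
```

A CLI tool wrapping a query like this is the kind of thing an agent can call repeatedly while wandering the library.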

I wrote more about the process here: https://pieterma.es/syntopic-reading-claude/
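The embedding-similarity browsing can be sketched the same way. These three-dimensional vectors are toy stand-ins; the real system would use model-generated embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical chunk embeddings (names invented for illustration).
chunks = {
    "jobs_reality_distortion": [0.9, 0.1, 0.2],
    "theranos_fake_demos": [0.8, 0.2, 0.3],
    "gardening_tips": [0.1, 0.9, 0.1],
}

def neighbours(name):
    """Rank the other chunks by similarity to the named chunk."""
    query = chunks[name]
    scored = [(other, cosine(query, vec))
              for other, vec in chunks.items() if other != name]
    return sorted(scored, key=lambda t: t[1], reverse=True)

print(neighbours("jobs_reality_distortion")[0][0])  # -> theranos_fake_demos
```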

I’m curious if this way of reading resonates for anyone else - LLM-mediated or not.

471 points | 141 comments
wormpilled 1 day ago|
>A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems

Interesting... seems like it wants the keys on your system! ;)

typon 1 day ago||
The website design and content are much nicer than the "ideas" here. Just standard LLM slop once you've actually read some of these books.
only-one1701 1 day ago||
This is an IQ test lol
jennyholzer6 15 hours ago|
[dead]
jereees 1 day ago||
now do this for research papers! fun stuff :)
pixelsmithy 14 hours ago||
Speaking of offloading thinking… I asked Gemini 3.0 Pro what it thought of this concept and criticisms in the comments: What are your thoughts on this project? Note criticisms in the comment section and tell me if you agree with any of them. https://news.ycombinator.com/item?id=46567400

This project, titled "Show HN: I used Claude Code to discover connections between 100 books," is an attempt to use an LLM (Claude) to perform "syntopical reading"—analyzing a library of 100 Hacker News favorite books to find thematic "trails" and connections between seemingly disparate texts. The author used CLI tools and LLM orchestration to index topics and generate these connections, aiming to move beyond simple summarization toward deeper insight.

Below are my thoughts on the project, followed by an analysis of the specific criticisms raised in the thread.

My Thoughts

Conceptually, this is a fascinating experiment in "digital humanities." It attempts to automate a very high-level cognitive task: synthesis. Usually, we use computers for retrieval (search) or storage. Using them to find semantic bridges between concepts like "Jobs' reality distortion field" and "Theranos' fake demos" is a compelling use case for LLMs.

However, the execution reveals the current limits of this technology. The resulting "connections" often feel like a parlor trick—impressive that the machine did it, but often lacking the "click" of genuine insight. The project succeeds more as a technical visualization of vector embeddings than as a literary tool. It produces a map of linguistic proximity rather than conceptual necessity.

Criticisms & Agreement Analysis

Here are the main criticisms from the comment section and my take on them:

1. The "Rorschach Test" / Spurious Connections

Criticism: Users like tmountain, smusamashah, and timoth3y argue that the connections are "weaker than weak" or purely surface-level (e.g., linking "fracture" in one book to "crumble" in another). They suggest the project is an "LLM Rorschach test" where the human user forces meaning onto random noise generated by the model.

Do I agree? Yes. Reasoning: LLMs operate on statistical probability and vector similarity. They often confuse topical relatedness (these words appear in similar contexts) with causal or logical connection. A connection between "Us/Them" and "fictions" might make sense in a vector space, but to a human reader expecting a philosophical argument, it feels disjointed. Without the reasoning for the link being rigorous, the user has to do the heavy lifting to invent the connection, making the tool less of a "guide" and more of a "random prompt generator."

2. Outsourcing Critical Thought

Criticism: Users eloisius and DrewADesign argue that the project defeats the purpose of reading. They contend that "the thinking is the point," and having a machine find connections robs the reader of the synthesis process that leads to understanding.

Do I agree? Partially. Reasoning: If the goal is learning, they are correct; you cannot learn by having a machine digest information for you. However, if the goal is discovery or research, this criticism is too harsh. Researchers have always used indices, concordances, and bibliographies to find connections they didn't know existed. If this tool is treated as a "super-index" rather than a "replacement reader," it has validity. The danger lies in mistaking the map (the AI's graph) for the territory (the actual knowledge).

3. Hallucinations and Conceptual Errors

Criticism: User 8organicbits pointed out a weird label ("Thanos committing fraud" in a section about "useful lies") and questioned the logic of calling a fraud "useful" if the founder is in prison.

Do I agree? Yes. Reasoning: (Note: User Terretta clarified the commenter likely confused the comic villain Thanos with the company Theranos, which was in the text). However, the criticism about the label "useful lies" holds up. The LLM likely grouped "Theranos" under "useful lies" because the deception functioned for a time, but it lacks the nuance to understand that "fraud" and "useful tool" are categorically different to a human moral compass. This highlights the "alien" nature of LLM categorization—it organizes by semantic weight, not human logic.

4. "LLM Slop" and Fatigue

Criticism: User typon and others noted the descriptions have a "distinct LLM voice" and dismissed it as "slop." User Aurornis mentioned recognizing the writing style immediately.

Do I agree? Yes. Reasoning: By 2026 (the context of this thread), users are highly attuned to "AI-ese"—the perfectly grammatical but hollow, hedging, and overly enthusiastic tone of standard model outputs. This "slop" creates a trust deficit. When a human reads a connection written by a human, they assume intent. When they read one written by an LLM, they assume hallucination until proven otherwise. This high barrier to trust makes the project harder to enjoy.

Conclusion

I agree with the consensus of the thread: Cool tech demo, shallow utility. The project is a great example of what LLMs can do (processing vast amounts of text to find patterns), but it inadvertently highlights what they cannot do (understand the deep, human significance of those patterns). It effectively automates the "what" but misses the "so what?"

djeastm 12 hours ago|
I might be in the minority, but I don't really want to come to HN to see what an LLM has to say about the topic.

Perhaps you might instead provide your own TL;DR after reading it yourself?

mannanj 1 day ago||
Seems like a lot of successful leaders have a history of or normalize deception and lying for some benefit.
joe_the_user 1 day ago||
A fun tendency is that Claude kept getting distracted by topics of secrecy, conspiracy, and hidden systems - as if the task itself summoned a Foucault’s Pendulum mindset.

It's all fun and games 'til someone loses an eye/mind/even-tenuous-connection-to-reality.

Edit: I'd mention that the themes Claude finds qualify as important stuff imo. But they're all pretty grim, and it's a bit problematic focusing on them for a long period. Also, they are often the grimmest spin on things that are already well known.

drakeballew 1 day ago|
Don't believe Claude, let's put it that way.
jennyholzer6 15 hours ago||
What I'm taking from this post and the responses to it is that LLMs are used most enthusiastically by functionally illiterate people.

What the LLM eats doesn't make you shit.

durch 1 day ago|
[flagged]
glemion43 1 day ago||
I'm carrying a thought around for the last few weeks:

An LLM is a transformer. It transforms a prompt into a result.

Or a human idea into a concrete java implementation.

Currently I'm exploring what unexpected or curious transformations LLMs are capable of but haven't found much yet.

At least I myself was surprised that an LLM can transform a description of something into an image by generating an SVG.

durch 1 day ago||
Format conversions (text → code, description → SVG) are the transformations most reach for first. To me the interesting ones are cognitive: your vague sense → something concrete you can react to → refined understanding. The LLM gives you an artifact to recognize against. That recognition ("yes, more of that" or "no, not quite") is where understanding actually shifts. Each cycle sharpens what you're looking for, a bit like a flywheel, each feeds into the next one.
golemotron 11 hours ago||
That's true, but it can be a trap. I recommend always generating a few alternatives to avoid our bias toward the first generation. When we don't do that we are led rather than leading.
calmoo 1 day ago|||
Ironically your comment is clearly written by an LLM.
durch 1 day ago||
Ironic indeed: pattern-matching the prose style instead of engaging the idea is exactly the shallow reading the post is about.
calmoo 1 day ago||
Your original comment is completely void of any substance or originality. Please don't fill the web with robot slop and use your own voice. We both know what you're doing here.
drekipus 1 day ago||
I dunno, he might have just been reading too much that he really writes like this now. I've seen it happen.
calmoo 1 day ago||
no, definitely not. It was 100% LLM written. Look at their post history.
sidrag22 1 day ago|||
> gets at something fundamental.

  :D
afro88 1 day ago||
LLMs are generators, and that was the correct way to view them at the start. Agents explore.
durch 1 day ago||
Generator vs. explorer is a useful distinction, but it's incomplete. Agents without a recognition loop are just generators with extra steps.

What makes exploration valuable is the cycle: act, observe, recognize whether you're closer to what you wanted, then refine. Without that recognition ("closer" or "drifting"), you're exploring blind.

Context is what lets the loop close. You need enough of it to judge the outcome. I think the real shift isn't generators → agents. It's one-shot output → iterative refinement with judgment in the loop.

throwawaySimon 1 day ago||
Please stop.
durch 1 day ago||
Is there something in there you'd like to discuss further? I've been thinking a lot about these ideas ever since LLMs came around, and I think there are many more of these discussions ahead of us...
throwawaySimon 1 day ago||
Kind of tedious trying to have a discussion with someone who clearly generates their part.