Posted by jamesponddotco 1/10/2026

Show HN: Librario, a book metadata API that aggregates Google Books, ISBNDB, and more

TLDR: Librario is a book metadata API that aggregates data from Google Books, ISBNDB, and Hardcover into a single response, solving the problem of no single source having complete book information. It's currently pre-alpha, AGPL-licensed, and available to try now[0].

My wife and I have a personal library with around 1,800 books. I started working on a library management tool for us, but I quickly realized I needed a source of data for book information, and none of the solutions available provided all the data I needed. One might provide the series, the other might provide genres, and another might provide a good cover, but none provided everything.

So I started working on Librario, a book metadata aggregation API written in Go. It fetches information about books from multiple sources (Google Books, ISBNDB, and Hardcover, with Goodreads and Anna's Archive next), merges everything, and saves it all to a PostgreSQL database for future lookups. The idea is that the database gets stronger over time as more books are queried.
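
For a rough idea of the architecture, each source sits behind a small extractor interface. This is an illustrative sketch with made-up names, not the actual code:

  package librario // hypothetical sketch; names may differ from the repo

  import "context"

  // Book holds the merged metadata fields (trimmed down here).
  type Book struct {
      Title, Publisher, Language string
      PageCount                  int
  }

  // Extractor is one metadata source (Google Books, ISBNDB, Hardcover, ...).
  type Extractor interface {
      Name() string  // e.g. "googlebooks"
      Priority() int // results are sorted by this before merging
      Extract(ctx context.Context, isbn string) (*Book, error)
  }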

You can see an example response here[1], or try it yourself:

  curl -s -H 'Authorization: Bearer librario_ARbmrp1fjBpDywzhvrQcByA4sZ9pn7D5HEk0kmS34eqRcaujyt0enCZ' \
  'https://api.librario.dev/v1/book/9781328879943' | jq .
  
This is pre-alpha and runs on a small VPS, so keep that in mind. I've never hit the rate limits of the third-party services, so depending on how this post goes, I may or may not find out how well the code handles them.

The merger is the heart of the service, and figuring out how to combine conflicting data from different sources was the hardest part. In the end I settled on field-specific strategies that are quite naive but work for now.

Each extractor has a priority, and results are sorted by that priority before merging. But priority alone isn't enough, so different fields need different treatment.

For example:

- Titles use a scoring system. I penalize titles containing parentheses or brackets because sources sometimes shove subtitles into the main title field. Overly long titles (80+ chars) also get penalized since they often contain edition information or other metadata that belongs elsewhere.

- Covers collect all candidate URLs, then a separate fetcher downloads and scores them by dimensions and quality. The best one is stored locally and served by Librario itself.

For most other fields (publisher, language, page count), I just take the first non-empty value by priority. Simple, but it works.
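
Concretely, the title scoring and the first-non-empty fallback come out to something like this (continuing the hypothetical sketch above; add "strings" to its imports):

  // scoreTitle penalizes titles that look polluted; higher is better.
  func scoreTitle(t string) int {
      score := 100
      if strings.ContainsAny(t, "()[]") {
          score -= 30 // subtitles shoved into the main title field
      }
      if len(t) >= 80 {
          score -= 20 // likely edition info or other misplaced metadata
      }
      return score
  }

  // mergeTitle picks the best-scoring non-empty title. Results are
  // pre-sorted by priority, so ties go to the higher-priority source.
  func mergeTitle(results []*Book) string {
      best, bestScore := "", -1
      for _, r := range results {
          if r.Title != "" && scoreTitle(r.Title) > bestScore {
              best, bestScore = r.Title, scoreTitle(r.Title)
          }
      }
      return best
  }

  // firstNonEmpty is the strategy for publisher, language, and the
  // like: take the first non-empty value in priority order.
  func firstNonEmpty(values ...string) string {
      for _, v := range values {
          if v != "" {
              return v
          }
      }
      return ""
  }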

I recently added a caching layer[2], which sped things up nicely. I considered migrating from net/http to Fiber at some point[3], but decided against it. Going outside the standard library felt wrong, and the migration didn't buy much in the end.
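
The lookup path is conceptually just cache-aside in front of the merger. Roughly (hypothetical names again; the real code likely differs):

  // Cache is a minimal interface for the caching layer.
  type Cache interface {
      Get(isbn string) (*Book, bool)
      Set(isbn string, b *Book)
  }

  type Server struct {
      cache      Cache
      extractors []Extractor
  }

  // lookup serves the cached merge when present; otherwise it runs the
  // extractors, merges the results, caches them, and returns the book.
  func (s *Server) lookup(ctx context.Context, isbn string) (*Book, error) {
      if b, ok := s.cache.Get(isbn); ok {
          return b, nil
      }
      b, err := s.mergeFromExtractors(ctx, isbn)
      if err != nil {
          return nil, err
      }
      s.cache.Set(isbn, b)
      return b, nil
  }

  func (s *Server) mergeFromExtractors(ctx context.Context, isbn string) (*Book, error) {
      // run the extractors in priority order and apply the field
      // strategies sketched above; body elided
      panic("not shown")
  }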

The database layer is being rewritten before v1.0[4]. I'll be honest: the original schema was written by AI, and while I tried to guide it in the right direction with SQLC[5] and good documentation, database design isn't my strong suit and I couldn't confidently vouch for the code. Rather than ship something I don't fully understand, I hired the developers from SourceHut[6] to rewrite it properly.

I've got a 5-month-old and we're still adjusting to their schedule, so development is slow. I've mentioned this project in a few HN threads before[7], so I’m pretty happy to finally have something people can try.

Code is AGPL and on SourceHut[8].

Feedback and patches[9] are very welcome :)

[0]: https://sr.ht/~pagina394/librario/

[1]: https://paste.sr.ht/~jamesponddotco/a6c3b1130133f384cffd25b3...

[2]: https://todo.sr.ht/~pagina394/librario/16

[3]: https://todo.sr.ht/~pagina394/librario/13

[4]: https://todo.sr.ht/~pagina394/librario/14

[5]: https://sqlc.dev

[6]: https://sourcehut.org/consultancy/

[7]: https://news.ycombinator.com/item?id=45419234

[8]: https://sr.ht/~pagina394/librario/

[9]: https://git.sr.ht/~pagina394/librario/tree/trunk/item/CONTRI...

140 points | 48 comments
moritzruth 1/11/2026
What do you think about BookBrainz?

https://bookbrainz.org/

jamesponddotco 1/11/2026
First time I'm seeing it, to be honest, but it looks interesting. I do plan on having a UI for Librario (I built a few mockups yesterday[1][2][3]), and I think the idea is similar, but BookBrainz looks bigger in scope.

I could add them as an extractor, I suppose :thinking:

[1]: https://i.cpimg.sh/pexvlwybvbkzuuk8.png

[2]: https://i.cpimg.sh/eypej9bshk2udtqd.png

[3]: https://i.cpimg.sh/6iw3z0jtrhfytn2u.png

nmstoker 1/11/2026
This is great: both the service itself and that you're extending it and considering a UI.

Personally I would go with option 2, as the colour from the covers beats the anaemic feel of option 1, and it seems more original than option 3's search with a grid below.

jamesponddotco 1/11/2026
Glad you liked the idea!

Number two is what my wife and I prefer too, and likely what's going to be chosen in the end.

WillAdams 1/11/2026
Doesn't seem to have a very complete dataset --- the first book I thought to look for, Hal Clement's _Space Lash_ (originally published as _Small Changes_) is absent, and I didn't see the later collection _Music of Many Spheres_ either:

https://www.goodreads.com/book/show/939760.Music_of_Many_Sph...

mehdi1964 1/11/2026
Nice approach! Merging metadata from multiple sources is tricky, especially handling conflicts like titles and covers. Curious how you plan to handle scalability as your database grows—caching helps, but will the naive field strategies hold with thousands of books?

jamesponddotco 1/11/2026
Right now the merging happens on the fly and the result is then cached. In the future I imagine the finished merge will be saved as JSON to the database, depending on which is more expensive, the merging or a database call.

Merging on the fly kinda works for the future too, for when the data changes or the merging process changes.

No idea what the future will hold. The idea is to pre-warm the database after the schema has been refactored, and once we have thousands of books from that, I’ll know for sure what to do next.

TLDR, there is a lot of “think and learn” as I go here, haha.

zvr 1/11/2026
Would it be possible to use a SQLite file instead of a PostgreSQL instance? Or do you rely on some specific PostgreSQL functionality?

jamesponddotco 1/11/2026
No, I decided pretty early on to make it database-specific instead of more generic, so we do use some PostgreSQL features right now, like its UUIDv7 generation.

But once the database refactor is done, I wouldn’t say no to a patch that made the service database agnostic.

omederos 1/11/2026
502 Bad Gateway :|

jamesponddotco 1/11/2026
It seems someone found a bug that triggered a panic, and systemd failed to restart the service because the PID file wasn't removed. Fixed now, should be back online :)

sijirama 1/11/2026
hella hella cool

good luck