Posted by jbdamask 13 hours ago

Show HN: Now I Get It – Translate scientific papers into interactive webpages (nowigetit.us)
Understanding scientific articles can be tough, even in your own field. Trying to comprehend articles from others? Good luck.

Enter, Now I Get It!

I made this app for curious people. Simply upload an article and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.

Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.

Free for now - it's capped at 20 articles per day so I don't burn cash.

A few things I (and maybe you will) find interesting:

* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.

* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.

* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.

* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.

* I'm an AWS fan and have been impressed by Opus' ability to write good CFN. It still needs a bunch of guidance around distributed architecture but way better than last year.

183 points | 99 comments
alwinaugustin 9 hours ago|
There is a limit of 100 pages. Tried to upload Architectural Styles and the Design of Network-based Software Architectures (REST, Roy T. Fielding), but it is 180 pages.
jbdamask 9 hours ago|
Good to know. There are also limits on context window and file size. These errors are emerging as people use the app. I'll add them to the FAQ.

The app doesn't do any chunking of PDFs.
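For what it's worth, a naive chunking scheme over already-extracted page text might look like this (hypothetical helper, not the app's actual code):

```python
def chunk_pages(pages, max_pages=100):
    """Split a list of per-page text strings into chunks of at most
    max_pages pages, so each chunk fits under the page limit."""
    return [pages[i:i + max_pages] for i in range(0, len(pages), max_pages)]

# A 180-page document (like Fielding's dissertation) becomes two chunks.
chunks = chunk_pages([f"page {n}" for n in range(180)])
```

The hard part isn't the split, it's stitching the per-chunk outputs back into one coherent page, which is why the app skips it for now.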

Vaslo 11 hours ago||
I’d love it if this could be self-hosted, but I understand you may want to monetize it. I’ll keep checking back.
jbdamask 10 hours ago|
In some other apps, I've toyed around with charging for code access. Basically, a flat rate gets you access to the repo.

Would that interest you?

Personally, I hate subscription pricing and think we need more innovation in pricing models.

Vaslo 5 hours ago||
Yes I would be interested in that for sure, and I don’t have an issue with paying for the AI backend API too.
jbdamask 3 hours ago||
Doh! I didn’t think of that. Interesting idea.
sean_pedersen 10 hours ago||
very cool! would be useful if headings were linkable using anchors
jbdamask 10 hours ago|
Hmmmm...I think they are, sometimes. I could add that to the system prompt. Thanks
croes 11 hours ago||
Are documents hashed and the results cached?
jbdamask 9 hours ago|
It's much simpler than that:

* HTMLs stored on S3, behind CloudFront
* Links and metadata in DDB
* Lambdas to handle everything
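If I did want the caching you describe, a content hash would be a natural key for both the S3 object and the DDB item. A sketch (not what the app does today):

```python
import hashlib

def cache_key(pdf_bytes: bytes) -> str:
    """Derive a deterministic key from the upload's content, so the
    same PDF would always map to the same S3 object / DDB item."""
    return hashlib.sha256(pdf_bytes).hexdigest()

# Identical uploads collide on purpose; any byte change gets a new key.
k1 = cache_key(b"%PDF-1.7 example")
k2 = cache_key(b"%PDF-1.7 example")
```

A Lambda could check DDB for that key before ever calling the model, which would make repeat uploads free.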
jbdamask 9 hours ago||
The app may be getting throttled. If you're waiting on a job, check back in a bit.
enos_feedler 13 hours ago||
Can I spin this up myself? Is the code anywhere? Thanks!
ayhanfuat 12 hours ago||
I don't want to downplay the effort here but from my experience you can get yourself a neat interactive summary html with a short prompt and a good model (Opus 4.5+, Codex 5.2+, etc).
jbdamask 11 hours ago|||
Totally fair, I addressed this in my original post.
earthscienceman 11 hours ago|||
Can you give an example of the most useful prompting you find for this? I'd like to interact with papers just so I can have my attention held. I struggle to motivate myself to read through something that's difficult to understand.
jbdamask 10 hours ago||
I replied to a comment above with the system prompt.

Something I've learned is that the standard, "Summarize this paper" doesn't do a great job because summaries are so subjective. But if you tell a frontier LLM, like Opus 4.6, "Turn this paper into an interactive web page highlighting the most important aspects" it does a really good job. There are still issues with over/under weighting the various aspects of a paper but the models are getting better.

What I find fascinating is that LLMs are great at translation so this is an experiment in translating papers into software, albeit very simple software.
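To be concrete, the prompt side is tiny. A sketch, with the instruction paraphrased (this is not the app's exact system prompt):

```python
PROMPT = (
    "Turn this paper into an interactive web page highlighting "
    "the most important aspects."
)

def build_request(paper_text: str) -> str:
    """Wrap the extracted paper text with the instruction; the result
    is what would be sent to a frontier model via its API."""
    return f"{PROMPT}\n\n<paper>\n{paper_text}\n</paper>"

req = build_request("Attention Is All You Need ...")
```

Almost all of the real work happens inside the model; the app's job is extraction, orchestration, and hosting the result.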

jbdamask 12 hours ago||
No, it’s not open source. Not sure what I’m doing with it yet.

Can you give me more info on why you’d want to install it yourself? Is this an enterprise thing?

poly2it 12 hours ago||
It's down and it could be interesting to iterate on.
jbdamask 11 hours ago||
Fair. If you want to see the architecture, here's the DevLog: https://johndamask.substack.com/p/devlog-now-i-get-it
relaxing 8 hours ago||
I picked the “Attention is All You Need” example at the top, and wow it is not great!

Didn’t take long to find hallucination/general lack of intelligence:

> For each word, we compute three vectors: a Query (what am I looking for?), a Key (what do I contain?), and a Value (what do I give out?).

What? That’s the worst description of a key-value relationship I’ve ever read, unhelpful for understanding what the equation is doing, and just wrong.

> Attention(Q, K, V) = softmax( Q·Kᵀ / √dk ) · V

> 3 Mask (Optional) Block future positions in decoder

Not present in this equation, and also not a great description of masking in the decoder.

> 5 × V Weighted sum of values = output

Nope!

https://nowigetit.us/pages/f4795875-61bf-4c79-9fbe-164b32344...
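For reference, the quoted equation is just this, as a NumPy sketch (row-wise softmax over the query-key scores):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # each query scored against each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
```

So the output row for each position really is a softmax-weighted sum of the value vectors, which is exactly the part the generated page garbled.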

jbdamask 7 hours ago|
LLMs, even the best ones, are still hit or miss wrt quality. Constantly improving, though.

I see more confusion from Opus 4.x about how to weight the different parts of a paper in terms of importance than I see hallucinations of flat out incorrect stuff. But these things still happen.

hackernewds 7 hours ago||
Surely, but isn't it a considerable concern? Deflecting constructive feedback is probably not the best look for a Show HN.
jbdamask 7 hours ago||
Hmmm, didn’t realize I was deflecting - just stating facts. But if I came across that way then criticism noted.

If I turned this into a paid app then more attention would be given to quality. There’s only so much an app that leverages LLMs can do, though. With enough trace data and user feedback I could imagine building out Evals from failure modes.

I can think of a few ways to provide a better UX. One is already built-in - there’s a “Recreate” button the original uploader can click if they don’t like the result.

Things could get pretty sophisticated after that, such as letting the user tweak the prompt, allowing for section-by-section re-dos, changing models, or even supporting manual edits.

From a commercial product perspective, it’s interesting to think about the cost/benefit of building around the current limits of LLMs vs building for an experience and betting the models will get better. The question is where to draw the line and where to devote cycles. Something worthy of its own thread.

fancymcpoopoo 6 hours ago|
People will do anything except work
mpalmer 6 hours ago|
just look at you!