Posted by jbdamask 13 hours ago
Enter, Now I Get It!
I made this app for curious people. Simply upload an article, and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.
Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.
Free for now - it's capped at 20 articles per day so I don't burn cash.
A few things I (and maybe you will) find interesting:
* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.
* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.
* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.
* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.
* I'm an AWS fan and have been impressed by Opus' ability to write good CloudFormation (CFN). It still needs a fair amount of guidance around distributed architecture, but it's way better than last year.
The app doesn't do any chunking of PDFs. Would that interest you?
Personally, I hate subscription pricing and think we need more innovation in pricing models.
Something I've learned is that the standard "Summarize this paper" prompt doesn't do a great job, because summaries are so subjective. But if you tell a frontier LLM, like Opus 4.6, "Turn this paper into an interactive web page highlighting the most important aspects", it does a really good job. There are still issues with over- or under-weighting the various aspects of a paper, but the models are getting better.
What I find fascinating is that LLMs are great at translation so this is an experiment in translating papers into software, albeit very simple software.
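The prompting difference described above can be sketched as a small request builder. This is hypothetical code, not the app's actual implementation: the model name, `max_tokens` value, and the commented-out SDK call are my assumptions for illustration.

```python
# Hypothetical sketch of the prompting approach described above -- not the
# app's actual code. Model name and max_tokens are illustrative assumptions.
PAGE_PROMPT = (
    "Turn this paper into an interactive web page "
    "highlighting the most important aspects."
)

def build_request(paper_text: str, model: str = "claude-opus-example") -> dict:
    """Build the message payload; the network call is kept separate so the
    prompt construction can be tested without an API key."""
    return {
        "model": model,
        "max_tokens": 8000,
        "messages": [
            {"role": "user", "content": f"{PAGE_PROMPT}\n\n{paper_text}"}
        ],
    }

# With the Anthropic Python SDK, the payload would be sent roughly like this:
# import anthropic
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# html = client.messages.create(**build_request(paper_text)).content[0].text
```

Keeping the payload builder separate from the network call also makes it easy to swap prompts or models when experimenting.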
Can you give me more info on why you’d want to install it yourself? Is this an enterprise thing?
Didn’t take long to find hallucination/general lack of intelligence:
> For each word, we compute three vectors: a Query (what am I looking for?), a Key (what do I contain?), and a Value (what do I give out?).
What? That’s the worst description of a key-value relationship I’ve ever read, unhelpful for understanding what the equation is doing, and just wrong.
> Attention(Q, K, V) = softmax( Q·Kᵀ / √d_k ) · V
> 3 Mask (Optional) Block future positions in decoder
Not present in this equation, and also not a great description of masking in an RNN.
> 5 × V Weighted sum of values = output
Nope!
https://nowigetit.us/pages/f4795875-61bf-4c79-9fbe-164b32344...
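For reference, the quoted formula is standard scaled dot-product attention. A minimal pure-Python sketch, including the optional causal mask the generated page's table refers to (my illustrative code, not taken from the page), might look like:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]  # exp(-inf) -> 0.0, so masked scores vanish
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V, causal=False):
    """Scaled dot-product attention, softmax(Q·Kᵀ/√d_k)·V, on lists of vectors."""
    d_k = len(K[0])
    out = []
    for i, q in enumerate(Q):
        # similarity of query i to every key, scaled by sqrt(d_k)
        scores = [sum(qa * ka for qa, ka in zip(q, k)) / math.sqrt(d_k) for k in K]
        if causal:
            # mask future positions: query i may only attend to keys j <= i
            scores = [s if j <= i else float("-inf") for j, s in enumerate(scores)]
        w = softmax(scores)  # attention weights for row i, sum to 1
        # output row i is the weighted sum of the value vectors
        out.append([sum(wj * v[c] for wj, v in zip(w, V)) for c in range(len(V[0]))])
    return out
```

Note that the mask is applied to the scores before the softmax, so the "weighted sum of values" step is the same whether or not masking is on.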
I see more confusion from Opus 4.x about how to weight the different parts of a paper in terms of importance than I see hallucinations of flat-out incorrect stuff. But these things still happen.
If I turned this into a paid app, more attention would be given to quality. There's only so much an app that leverages LLMs can do, though. With enough trace data and user feedback, I could imagine building out evals from failure modes.
I can think of a few ways to provide a better UX. One is already built-in - there’s a “Recreate” button the original uploader can click if they don’t like the result.
Things could get pretty sophisticated after that, such as letting the user tweak the prompt, allowing for section-by-section re-dos, changing models, or even supporting manual edits.
From a commercial product perspective, it’s interesting to think about the cost/benefit of building around the current limits of LLMs vs building for an experience and betting the models will get better. The question is where to draw the line and where to devote cycles. Something worthy of its own thread.