Posted by __rito__ 9 hours ago

Auto-grading decade-old Hacker News discussions with hindsight(karpathy.bearblog.dev)
Related from yesterday: Show HN: Gemini Pro 3 imagines the HN front page 10 years from now - https://news.ycombinator.com/item?id=46205632
311 points | 150 comments
Rperry2174 6 hours ago|
One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.

If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.

Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.

On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.

I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.

yunwal 5 hours ago||
"Boring but right" generally means that this prediction is already priced in to our current understanding of the world though. Anyone can reliably predict "the sun will rise tomorrow", but I'm not giving them high marks for that.
onraglanroad 5 hours ago|||
I'm giving them higher marks than the people who say it won't.

LLMs have seen huge improvements over the last 3 years. Are you going to make the bet that they will continue to make similarly huge improvements, taking them well past human ability, or do you think they'll plateau?

The former is the boring, linear prediction.

bryanrasmussen 3 hours ago|||
>The former is the boring, linear prediction.

right, because if there is one thing that history shows us again and again, it is that things that have a period of huge improvements never plateau but instead continue improving to infinity.

Improvement to infinity, that is the sober and wise bet!

p-e-w 1 hour ago||
The prediction that a new technology that is being heavily researched plateaus after just 5 years of development is certainly a daring one. I can’t think of an example from history where that happened.
bigiain 3 hours ago||||
LaunchHN: Announcing Twoday, our new YC backed startup coming out of stealth mode.

We’re launching a breakthrough platform that leverages frontier-scale artificial intelligence to model, predict, and dynamically orchestrate solar luminance cycles, unlocking the world’s first synthetic second sunrise by Q2 2026. By combining physics-informed multimodal models with real-time atmospheric optimisation, we’re redefining what’s possible in climate-scale AI and opening a new era of programmable daylight.

yunwal 5 hours ago||||
> Are you going to make the bet that they will continue to make similarly huge improvements

Sure yeah why not

> taking them well past human ability,

At what? They're already better than me at reciting historical facts. You'd need some actual prediction here for me to give you "prescience".

Terr_ 10 minutes ago|||
I imagine "better" in this case depends on how one scores "I don't know" or confident-sounding falsehoods.

Failures aren't just a ratio, they're a multi-dimensional shape.

janalsncm 5 hours ago||||
“At what?” is really the key question here.

A lot of the press likes to paint “AI” as a uniform field that continues to improve together. But really it’s a bunch of related subfields. Once in a blue moon a technique from one subfield crosses over into another.

“AI” can play chess at superhuman skill. “AI” can also drive a car. That doesn’t mean Waymo gets safer when we increase Stockfish’s Elo by 10 points.

onraglanroad 5 hours ago||||
At every intellectual task.

They're already better than you at reciting historical facts. I'd guess they're probably better at composing poems (they're not great but far better than the average person).

Or do you agree with me? I'm not looking for prescience marks, I'm just less convinced that people really make the more boring and obvious predictions.

yunwal 5 hours ago|||
What is an intellectual task? Once again, there's tons of stuff LLMs won't be trained on in the next 3 years. So it would be trivial to just find one of those things and say voila! LLMs aren't better than me at that.

I'll make one prediction that I think will hold up. No LLM-based system will be able to take a generic ask like "hack the nytimes website and retrieve emails and password hashes of all user accounts" and do better than the best hackers and penetration testers in the world, despite having plenty of training data to go off of. It requires out-of-band thinking that they just don't possess.

hathawsh 4 hours ago||
I'll take a stab at this: LLMs currently seem to be rather good at details, but they seem to struggle greatly with the overall picture, in every subject.

- If I want Claude Code to write some specific code, it often handles the task admirably, but if I'm not sure what should be written, consulting Claude takes a lot of time and doesn't yield much insight, whereas 2 minutes with a human is 100x more valuable.

- I asked ChatGPT about some political event. It mirrored the mainstream press. After I reminded it of some obvious facts that revealed a mainstream bias, it agreed with me that its initial answer was wrong.

These experiences and others serve to remind me that current LLMs are mostly just advanced search engines. They work especially well on code because there is a lot of reasonably good code (and tutorials) out there to train on. LLMs are a lot less effective on intellectual tasks that humans haven't already written and published about.

medler 1 hour ago||
> it agreed with me that its initial answer was wrong.

Most likely that was just its sycophancy programming taking over and telling you what you wanted to hear

janalsncm 4 hours ago|||
To be clear, you are suggesting “huge improvements” in “every intellectual task”?

This is unlikely for the trivial reason that some tasks are roughly saturated. Modest improvements in chess playing ability are likely. Huge improvements probably not. Even more so for arithmetic. We pretty much have that handled.

But the more substantive issue is that intellectual tasks are not all interconnected. Getting significantly better at drawing hands doesn’t usually translate to executive planning or information retrieval.

yunwal 3 hours ago||
There’s plenty of room to grow for LLMs in terms of chess-playing ability, considering chess engines have them beat by around 1500 Elo points.
janalsncm 1 hour ago||
Sorry, I now realize this thread is about whether LLMs can improve on tasks and not whether AI can. Agreed there’s a lot of headroom for LLMs, less so for AI as a whole.
irishcoffee 2 hours ago|||
> At what? They're already better than me at reciting historical facts.

I wonder what happens if you ask deepseek about Tiananmen Square…

Edit: my “subtle” point was, we already know LLMs censor history. Trusting them to honestly recite historical facts is how history dies. “The victor writes history” has never been more true. Terrifying.

Dylan16807 1 hour ago||
> Edit: my “subtle” point was, we already know LLMs censor history. Trusting them to honestly recite historical facts is how history dies.

I mean, that's true but not very relevant. You can't trust a human to honestly recite historical facts either. Or a book.

> “The victor writes history” has never been more true.

I don't see how.

Dylan16807 3 hours ago|||
LLMs aren't getting better that fast. I think a linear prediction says they'd need quite a while to maybe get "well past human ability", and if you incorporate the increases in training difficulty the timescale stretches wide.
SubiculumCode 5 hours ago||||
Perhaps a new category: 'highest-risk guess, but right the most often'. Those are the high-impact predictions.
arjie 5 hours ago||
Prediction markets have pretty much obviated the need for these things. Rather than rely on "was that really a hot take?" you have a market system that rewards those with accurate hot takes. The massive fees and lock-up period discourage low-return bets.
Karrot_Kream 5 hours ago|||
FWIW Polymarket (which is one of the big markets) has no lock-up period and, for now while they're burning VC coins, no fees. Otherwise agree with your point though.
gammarator 3 hours ago|||
Can’t wait for the brave new world of individuals “match fixing” outcomes on Polymarket.
Karrot_Kream 2 hours ago||
As opposed to the current world of brigading social media threads to make consensus look like it goes your way and then getting journalists scraping by on covering clickbait to cover your brigading as fact?
Gravityloss 5 hours ago|||
Something like correctness^2 x novel information content as a ranking?
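A toy sketch of that (assuming hypothetical `correctness` and `novelty` scores in [0, 1], which you'd still have to estimate somehow):

  def rank_score(correctness: float, novelty: float) -> float:
      # Squaring correctness punishes wrong calls much harder than
      # it rewards safe ones; multiplying by novelty zeroes out
      # "the sun will rise tomorrow" style predictions.
      return correctness ** 2 * novelty
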
jimbokun 1 hour ago|||
The one about LLMs and mental health is not a prediction but a current news report, the way you phrased it.

Also, the boring consistent progress case for AI plays out in the end of humans as viable economic agents requiring a complete reordering of our economic and political systems in the near future. So the “boring but right” prediction today is completely terrifying.

p-e-w 1 hour ago|||
“Boring” predictions usually state that things will continue to work the way they do right now. Which is trivially correct, except in cases where it catastrophically isn’t.

So the correctness of boring predictions is unsurprising, but also quite useless, because predicting the future is precisely about predicting those events which don’t follow that pattern.

adam1996TL 1 hour ago|||
[dead]
0manrho 35 minutes ago|||
It's because algorithmic feeds based on "user engagement" reward antagonism. If your goal is to get eyes on content, being boring, predictable, and nuanced is a sure way to get lost in the ever-increasing noise.
johnfn 5 hours ago|||
This suggests that the best way to grade predictions is some sort of weighting by how unlikely they were at the time. Like, if you were to open a prediction market for statement X: some sort of grade based on the delta between your confidence in the event and the market's "expected" value, summed over all your predictions.
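A minimal sketch of that kind of scoring (all probabilities here are made up, and the log score relative to the market is just one of several reasonable choices):

  import math

  def prediction_credit(your_prob, market_prob, outcome):
      # Log score relative to the market: positive only when your
      # stated confidence beat the consensus odds at the time.
      p_you = your_prob if outcome else 1.0 - your_prob
      p_mkt = market_prob if outcome else 1.0 - market_prob
      return math.log(p_you) - math.log(p_mkt)

  def total_credit(predictions):
      # predictions: iterable of (your_prob, market_prob, outcome)
      return sum(prediction_credit(y, m, o) for y, m, o in predictions)

  # A confident contrarian call that resolves true earns a lot; echoing
  # the consensus earns nothing; overconfident misses are penalized.
  total_credit([(0.9, 0.2, True), (0.5, 0.5, True), (0.8, 0.6, False)])
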
jacquesm 4 hours ago||
Exactly, that's the element that is missing. If there are 50 comments against and one pro, and that pro has it right in the longer term, then that is worth noticing; not when there are 50 comments pro and you were one of the 'pros'.

Going against the grain and turning out right is far more valuable than being right consistently when the crowd is with you already.

copperx 2 hours ago|||
Is this why depressed people often end up making the best predictions?

In personal situations there's clearly a self-fulfilling prophecy going on, but when it comes to the external world, the predictions come out pretty accurate.

simianparrot 6 hours ago|||
Instead of "LLM's will put developers out of jobs" the boring reality is going to be "LLM's are a useful tool with limited use".
jimbokun 1 hour ago||
That is at odds with predicting based on recent rates of progress.
xpe 1 hour ago||
> One thing this really highlights to me is how often the "boring" takes end up being the most accurate.

Would the commenter above mind sharing the method behind their generalization? Many people would spot-check maybe five items -- which is enough for our brains to start to guess at potential patterns -- and stop there.

On HN, when I see a generalization, one of my mental checklist items is to ask "what is this generalization based on?" and "If I were to look at the problem with fresh eyes, what would I conclude?".

jasonthorsness 8 hours ago||
It's fun to read some of these historic comments! A while back I wrote a replay system to better capture how discussions evolved at the time of these historic threads. Here's Karpathy's list from his graded articles, in the replay visualizer:

Swift is Open Source https://hn.unlurker.com/replay?item=10669891

Launch of Figma, a collaborative interface design tool https://hn.unlurker.com/replay?item=10685407

Introducing OpenAI https://hn.unlurker.com/replay?item=10720176

The first person to hack the iPhone is building a self-driving car https://hn.unlurker.com/replay?item=10744206

SpaceX launch webcast: Orbcomm-2 Mission [video] https://hn.unlurker.com/replay?item=10774865

At Theranos, Many Strategies and Snags https://hn.unlurker.com/replay?item=10799261

SauntSolaire 5 hours ago||
I'd love to see sentiment analysis done based on time of day. I'm sure it's largely time zone differences, but I see a large variance in the types of opinions posted to hn in the morning versus the evening and I'd be curious to see it quantified.
embedding-shape 3 hours ago||
Yeah, I see this constantly any time Europe is mentioned in a submission. Early European morning/day, regular discussions; but as the European afternoon/evening comes around, you start noticing a lot of anti-union sentiment, discussions start to shift into over-regulation, and the typical boring anti-Europe/EU talking points.
HanClinto 7 hours ago||
Okay, your site is a ton of fun. Thank you! :)
pierrec 30 minutes ago||
"the distributed “trillions of Tamagotchi” vision never materialized"

I begrudgingly accept my poor grade.

modeless 7 hours ago||
This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made. Some people's opinions are closer to reality than others and it's not always correlated with upvotes.

An extension of this would be to grade people on the accuracy of the comments they upvote, and use that to weight their upvotes more in ranking. I would love to read a version of HN where the only upvotes that matter are from people who agree with opinions that turn out to be correct. Of course, only HN could implement this since upvotes are private.
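One toy way to implement that weighting (names here are hypothetical, and it assumes some oracle has already judged which historical comments turned out correct):

  def weighted_score(upvoters, accuracy):
      # upvoters: user ids who upvoted a comment.
      # accuracy: user id -> fraction of that user's past upvotes that
      # landed on comments later judged correct (unknown users get 0.5).
      return sum(accuracy.get(u, 0.5) for u in upvoters)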

cootsnuck 6 hours ago||
The RES (Reddit Enhancement Suite) browser extension indirectly does this for me since it tracks the lifetime number of upvotes I give other users. So when I stumble upon a thread with a user with like +40 I know "This is someone whom I've repeatedly found to have good takes" (depending on the context).

It's subjective of course but at least it's transparently so.

I just think it's neat that it's kinda sorta a loose proxy for what you're talking about but done in arguably the simplest way possible.

nickff 6 hours ago|||
I am not a Redditor, but RES sounds like it would increase the ‘echo-chamber’ effect, rather than improving one’s understanding of contributors’ calibration.
mistercheph 6 hours ago|||
It depends on whether you vote based on the quality of the contribution to the discussion or based on how much you agree/disagree.
modeless 6 hours ago||||
Reddit's current structure very much produces an echo chamber with only one main prevailing view. If everyone used an extension like this I would expect it to increase overall diversity of opinion on the site, as things that conflict with the main echo chamber view could still thrive in their own communities rather than getting downvoted with the actual spam.
PunchyHamster 6 hours ago|||
More than having the exact same system but with any random reader voting? I'd say as long as you don't do "I disagree, therefore I downvote", it would probably be more accurate than essentially the same voting system driven by randoms, which Reddit/HN already have.
janalsncm 4 hours ago|||
That assumes your upvotes in the past were a good proxy for being correct today. You could have both been wrong.
potato3732842 1 hour ago|||
>This is a cool idea. I would install a Chrome extension that shows a score by every username on this site grading how well their expressed opinions match what subsequently happened in reality, or the accuracy of any specific predictions they've made.

Why stop there?

If you can do that you can score them on all sorts of things. You could make a "this person has no moral convictions and says whatever makes the number go up" score. Or some other kind of score.

Stuff like this makes the community "smaller" in a way. Like back in the old days on forums and IRC you knew who the jerks were.

leobg 4 hours ago|||
That’s what Elon’s vision was before he ended up buying Twitter: keep a digital track record for journalists. He wanted to call it Pravda.

(And we do have that in real life. Just as, among friends, we do keep track of who is in whose debt, we also keep a mental map of whose voice we listen to. Old school journalism still had that, where people would be reading someone’s column over the course of decades. On the internet, we don’t have that, or we have it rarely.)

TrainedMonkey 6 hours ago|||
I've long had a similar idea for stocks: analyze posts of people giving stock tips on WSB, Twitter, etc., and rank them by accuracy. I would be very surprised if this had not been done a thousand times by various trading firms and enterprising individuals.

Of course in the above example of stocks there are clear predictions (HNWS will go up) and an oracle who resolves it (stock market). This seems to be a way harder problem for generic free form comments. Who resolves what prediction a particular comment has made and whether it actually happened?

Karrot_Kream 5 hours ago||
I ran across Sybil [1] the other day which tries to offer a reputation score based on correct predictions in prediction markets.

[1]: https://sybilpredicttrust.info/

8organicbits 5 hours ago||
The problem seems underspecified; what does it mean for a comment to be accurate? It would seem that comments like "the sun will rise tomorrow" would rank highest, but they aren't surprising.
tptacek 6 hours ago||
'pcwalton, I'm coming for you. You're going down.

Kidding aside, the comments it picks out for us are a little random. For instance, this was an A+ predictive thread (it appears to be rating threads and not individual comments):

https://news.ycombinator.com/item?id=10703512

But there are just 11 comments, only one of them mine, and it's like a 1-sentence comment.

I do love that my unaccredited-access-to-startup-shares take is on that leaderboard, though.

Sophira 26 minutes ago||
It somehow feels right to see what GPT-5 thinks of the article titled "Machine learning works spectacularly well, but mathematicians aren’t sure why" and its discussion: https://karpathy.ai/hncapsule/2015-12-04/index.html#article-...
btbuildem 6 hours ago||
I spent a weekend making something similar for my Gmail account (which Google keeps nagging me about being 90% full). It's fascinating to be able to classify 65k+ emails (surprise: more than half are garbage), as well as summarize and trace the nature of communication between specific senders/recipients. It took about 50 hours on a dual RTX 3090 running Qwen 3.
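The core loop for this kind of batch classification can be small. A rough sketch, assuming a local OpenAI-compatible server (e.g. llama.cpp or vLLM serving a Qwen model) on localhost; the endpoint, model name, and labels are illustrative:

  import json, urllib.request

  LABELS = "PERSONAL, RECEIPT, NEWSLETTER, SPAM"

  def classify(body):
      # One email body in, one label out; temperature 0 for determinism.
      payload = {
          "model": "qwen3",
          "temperature": 0,
          "messages": [{"role": "user", "content":
              f"Classify this email as exactly one of: {LABELS}\n\n{body[:4000]}"}],
      }
      req = urllib.request.Request(
          "http://localhost:8000/v1/chat/completions",
          data=json.dumps(payload).encode(),
          headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)["choices"][0]["message"]["content"].strip()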

My original goal was to prune the account deleting all the useless things and keeping just the unique, personal, valuable communications -- but the other day, an insight has me convinced that the safer / smarter thing to do in the current landscape is the opposite: remove any personal, valuable, memorable items, and leave google (and whomever else is scraping these repositories) with useless flotsam of newsletters, updates, subscription receipts, etc.

LeroyRaz 5 hours ago||
I am surprised the author thought the project passed quality control. The LLM reviews seem mostly false.

Looking at the comment reviews on the actual website, the LLM seems to have mostly judged whether it agreed with the takes, not whether they came true, and it seems to have an incredibly poor grasp of its actual task of assessing whether the comments were predictive or not.

The LLM's comment reviews are often statements like "correctly characterized [programming language] as [opinion]."

This dynamic means the website mostly grades people on having the most conformist take (the take most likely to dominate the training data, and to be selected for in the LLM RL tuning process of pleasing the average user).

LeroyRaz 5 hours ago||
Examples: tptacek gets an 'A' for his comment on DF, with the LLM claiming that the user "captured DF's unforgiving nature, where 'can't do x or it crashes is just another feature to learn', which remained true until it was fixed on ..."

Link to LLM review: https://karpathy.ai/hncapsule/2015-12-02/index.html#article-....

So the LLM is praising a comment for describing DF as unforgiving (a characterization of the then-present, not a statement about the future). And worse, it seems like tptacek may in fact have been implying the opposite of what happened (i.e., that x would continue to crash, when it was eventually fixed).

Here is the original comment: " tptacek on Dec 2, 2015 | root | parent | next [–]

If you're not the kind of person who can take flaws like crashes or game-stopping frame-rate issues and work them into your gameplay, DF is not the game for you. It isn't a friendly game. It can take hours just to figure out how to do core game tasks. "Don't do this thing that crashes the game" is just another task to learn."

Note: I am paraphrasing the LLM review, as the website is also poorly designed: you can't select the text of the LLM review!

N.b., this choice of comment review is not overly cherry-picked. I just scanned the "best commentators" list, and tptacek was number two, with this particular egregiously unrelated-to-prediction LLM summary given as justification for his #2 rating.

hathawsh 5 hours ago|||
Are you sure? The third section of each review lists the “Most prescient” and “Most wrong” comments. That sounds exactly like what you're looking for. For example, on the "Kickstarter is Debt" article, here is the LLM's analysis of the most prescient comment. The analysis seems accurate and helpful to me.

https://karpathy.ai/hncapsule/2015-12-03/index.html#article-...

  phire

  > “Oculus might end up being the most successful product/company to be kickstarted…
  > Product wise, Pebble is the most successful so far… Right now they are up to major version 4 of their product. Long term, I don't think they will be more successful than Oculus.”

  With hindsight:

  Oculus became the backbone of Meta’s VR push, spawning the Rift/Quest series and a multi‑billion‑dollar strategic bet.
  Pebble, despite early success, was shut down and absorbed by Fitbit barely a year after this thread.

  That’s an excellent call on the relative trajectories of the two flagship Kickstarter hardware companies.
xpe 1 hour ago|||
Until someone publishes a systematic quality assessment, we're grasping at anecdotes.

It is unfortunate that the questions of "how well did the LLM do?" and "how does 'grading' work in this app?" seem to have gone out the window when HN readers see something shiny.

karmickoala 1 hour ago|||
I get what you're saying, but looking at some examples, they look kind of right, yet there are a lot of misleading facts sprinkled in, making the grading wrong. It is useful, but I'd suggest being careful about using this to make decisions.

Some of the issues could be resolved with better prompting (it was biased to always interpret every comment through the lens of predictions) and LLM-as-a-judge, but still. For example, Anthropic's Deep Research prompts sub-agents to pass along original quotes instead of paraphrasing, because paraphrasing can degrade the original message.

Some examples:

  Swift is Open Source (2015)
  ===========================
sebastiank123 got a C-, and was quoted by the LLM as saying:

  > “It could become a serious Javascript competitor due to its elegant syntax, the type safety and speed.”
Now, let's read his full comment:

  > Great news! Coding in Swift is fantastic and I would love to see it coming to more platforms, maybe even on servers. It could become a serious Javascript competitor due to its elegant syntax, the type safety and speed.
I don't interpret it as a prediction, but as a desire. The user is praising Swift. If it went the server way, perhaps it could replace JS, per the user's wishes. To make it even clearer: if someone had asked the commenter right after, "Is that a prediction? Are you saying Swift is going to become a serious Javascript competitor?", I don't think the answer would have been 'yes' in this context.

  How to be like Steve Ballmer (2015)
  ===================================
  
  Most wrong
  ----------
  
  >     corford (grade: D) (defending Ballmer’s iPhone prediction):
  >         Cited an IDC snapshot (Android 79%, iOS 14%) and suggested Ballmer was “kind of right” that the iPhone wouldn’t gain significant share.
  >         In 2025, iOS is one half of a global duopoly, dominates profits and premium segments, and is often majority share in key markets. Any reasonable definition of “significant” is satisfied, so Ballmer’s original claim—and this defense of it—did not age well.

Full quote:

  > And in a funny sort of way he was kind of right :) http://www.forbes.com/sites/dougolenick/2015/05/27/apple-ios...
  > Android: 79% versus iOS: 14%
"Any reasonable definition of 'significant' is satisfied"? That's not how I would interpret this. We see it clearly as a duopoly in North America. It's not wrong per se, but I'd say misleading. I know we could take this argument and see other slices of the data (premium phones worldwide, for instance), I'm just saying it's not as clear cut as it made it out to be.

  > volandovengo (grade: C+) (ill-equipped to deal with Apple/Google):
  >  
  >     Wrote that Ballmer’s fast-follower strategy “worked great” when competitors were weak but left Microsoft ill-equipped for “good ones like Apple and Google.”
  >     This is half-true: in smartphones, yes. But in cloud, office suites, collaboration, and enterprise SaaS, Microsoft became a primary, often leading competitor to both Apple and Google. The blanket claim underestimates Microsoft’s ability to adapt outside of mobile OS.
That's not what the user was saying:

  > Despite his public perception, he's incredibly intelligent. He has an IQ of 150.
  > 
  > His strategy of being a fast follower worked great for Microsoft when it had crappy competitors - it was ill equipped to deal with good ones like Apple and Google.
He was praising him, and he did miss opportunities at first. The OC did not make predictions about the later days.

  [Let's Encrypt] Entering Public Beta (2015)
  ===========================================

  - niutech: F "(endorsed StartSSL and WoSign as free options; both were later distrusted and effectively removed from the trusted ecosystem)"

Full quote:

  > There are also StartSSL and WoSign, which provide the A+ certificates for free (see example WoSign domain audit: https://www.ssllabs.com/ssltest/analyze.html?d=checkmyping.c...)

  - pjbrunet: F "(dismissed HTTPS-by-default arguments as paranoid, incorrectly asserted ISPs had stopped injection, and underestimated exactly the use cases that later moved to HTTPS)"
Full quote:

  > "We want to see HTTPS become the default."
  > 
  > Sounds fine for shopping, online banking, user authorizations. But for every website? If I'm a blogger/publisher or have a brochure type of website, I don't see point of the extra overhead.
  > 
  > Update: Thanks to those who answered my question. You pointed out some things I hadn't considered. Blocking the injection of invisible trackers and javascripts and ads, if that's what this is about for websites without user logins, then it would help to explicitly spell that out in marketing communications to promote adoption of this technology. The free speech angle argument is not as compelling to me though, but that's just my opinion.
I thought the debate was useful and so did pjbrunet, per his update.

I mean, we could go on, there are many others like these.

andy99 4 hours ago||
I haven’t looked at the output yet, but came here to say: LLM grading is crap. They miss things, they ignore instructions, bring in their own views, have no calibration, and in general are extremely poorly suited to this task. “Good” LLM-as-a-judge-type products (and none are great) use LLMs to make binary decisions - “do these atomic facts match, yes/no” type stuff - and aggregate them to get a score.
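That binary-then-aggregate pattern is small to sketch (the `ask_llm` helper here is hypothetical, standing in for whatever model call you use):

  def judge(claims, reference, ask_llm):
      # Reduce grading to yes/no questions on atomic claims, then
      # aggregate: return the fraction the judge says are supported.
      yes = 0
      for claim in claims:
          prompt = (f"Reference:\n{reference}\n\nClaim: {claim}\n"
                    "Does the reference support the claim? Answer YES or NO.")
          if ask_llm(prompt).strip().upper().startswith("YES"):
              yes += 1
      return yes / len(claims) if claims else 0.0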

I understand this is just a fun exercise so it’s basically what LLMs are good at - generating plausible sounding stuff without regard for correctness. I would not extrapolate this to their utility on real evaluation tasks.

hackthemack 7 hours ago|
I noticed the Hall of Fame grading of predictive comments has a quirk. It grades some comments on whether they came true or not, but consider the grading of a comment on this article:

https://news.ycombinator.com/item?id=10654216

The Cannons on the B-29 Bomber: "accurate account of LeMay stripping turrets and shifting to incendiary area bombing; matches mainstream history"

It gave a good grade to user cstross, but to my reading of the comment, cstross just recounted a bit of old history. Did the evaluation reward cstross just for giving a history lesson, or no?

karpathy 6 hours ago|
Yes, I noticed a few of these around. The LLM is a little too willing to give out grades for comments that were good/bad in a more general sense, even if they weren't making strong predictions specifically. Another thing I noticed is that the LLM has a very impressive recognition of the various usernames and who they belong to, and I think it shows a little bit of a bias in its evaluations based on the identity of the person. I tuned the prompt a little based on some low-hanging-fruit mistakes, but I think one can most likely iterate on it quite a bit further.