Posted by __rito__ 12/10/2025
This seems to be the result of the exercise? No evaluation?
My concern is that, even if the exercise is only an amusing curiosity, many people will take the results more seriously than they should, and be inspired to apply the same methods to products and initiatives that adversely affect people's lives in real ways.
That will most definitely happen. We have known for a while now that algorithmic methods are applied "to products and initiatives that adversely affect people's lives in real ways": https://www.scientificamerican.com/blog/roots-of-unity/revie...
I guess the question is whether LLMs will, for some reason, reinvigorate public sentiment and pressure on governing bodies to sincerely take up the ongoing responsibility of lessening the unique harms that reckless implementations of algorithms can amplify.
> I was reminded again of my tweets that said "Be good, future LLMs are watching". You can take that in many directions, but here I want to focus on the idea that future LLMs are watching. Everything we do today might be scrutinized in great detail in the future because doing so will be "free". A lot of the ways people behave currently I think make an implicit "security by obscurity" assumption. But if intelligence really does become too cheap to meter, it will become possible to do a perfect reconstruction and synthesis of everything. LLMs are watching (or humans using them might be). Best to be good.
Can we take a second and talk about how dystopian this is? Such an outcome is not inevitable; it relies on us making it so. The future is not deterministic, the future is determined by us. Moreover, Karpathy has significantly more influence on that future than your average HN user. We are doing something very *very* wrong if we are operating under the belief that this future is unavoidable. That future is simply unacceptable.
Tossing an idea like this off rather than properly executing it, without putting in the work to make it valuable, is exactly what irritates me about a lot of AI work. You can be 900 times as productive at producing mental popcorn, but if there was value to be had here, we're not getting it, just a whiff of it. Sure, fun project. But I don't feel particularly judged here. The funniest bit is the judgment on things that clearly could not yet have come to pass (for instance, because an exact date is mentioned that we have not yet reached). QA could be better.
I'm not worried about this project, but about harvesting and analyzing all that data and deanonymizing people.
That's exactly what Karpathy is saying. He's not being shy about it. He said "behave because the future panopticon can look into the past," which makes the panopticon effectively exist now.
> Be good, future LLMs are watching
> ...
> or humans using them might be
That's the problem. Not the accuracy of this toy project, but the idea of monitoring everyone and their entire history. The idea that we have to behave as if we're being actively watched by the government is literally the setting of 1984 lol. The idea that we have to behave that way now because a future government will use the Panopticon to look into the past is absolutely unhinged. You don't even know what the rules of that world will be!
Did we forget how unhinged the NSA's "harvest now, decrypt later" strategy is? Did we forget those giant data centers that were all the news talked about for a few weeks?
That's not the future I want to create, is it the one you want?
To act as if that future is unavoidable is a failure of *us*
If such a thing isn't already possible (it is to a certain extent), we are headed towards a point where your words alone will be enough to fingerprint you.
Besides, the main question is how difficult it is to deanonymize someone, not whether it's possible.
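To give a rough idea of how little it takes, here's a minimal stylometry sketch using character n-grams. The authors, texts, and library choice are my own assumptions; real deanonymization attacks use far larger corpora and richer features:

    # Attribute an anonymous comment to one of several known authors
    # by character n-gram similarity. Toy data, illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    known = {
        "alice": "I guess the question is whether this will really matter in the end...",
        "bob": "lol no, nobody is gonna care about any of this, trust me",
    }
    anon = "i guess my question is whether anyone will really care about this"

    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    X = vec.fit_transform(list(known.values()) + [anon])

    # Compare the anonymous text (last row) against each known author.
    sims = cosine_similarity(X[-1], X[:-1]).ravel()
    for author, score in zip(known, sims):
        print(f"{author}: {score:.3f}")

With enough text per author, this kind of matching gets uncomfortably accurate, which is the point.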
Privacy and security both have no perfect defense. For example, there are no passwords that are unhackable; there are only passwords that cannot be cracked within our current technology, budgets, and lifetimes. You could brute-force my HN password, it would just take billions of years.
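To put numbers on "billions of years," here's a back-of-envelope sketch; the password length, alphabet size, and attacker throughput are all assumptions:

    # Brute-force time estimate. Every constant here is an assumption,
    # not a fact about HN or any real system.
    alphabet = 95            # printable ASCII characters
    length = 16              # assumed password length
    guesses_per_sec = 1e12   # assumed attacker throughput (large GPU farm)

    keyspace = alphabet ** length          # ~4.4e31 candidate passwords
    seconds = keyspace / guesses_per_sec   # time to exhaust the keyspace
    years = seconds / (60 * 60 * 24 * 365)
    print(f"keyspace: {keyspace:.2e}, full search: {years:.2e} years")
    # ~1.4e12 years at 1e12 guesses/sec; shorter passwords fall fast.

The exact numbers don't matter; what matters is that "possible in principle" and "feasible in practice" are different claims.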
The same distinction is important here. My threat model on HN doesn't care if you need to spend millions of dollars or thousands of hours to deanonymize me. My handle is here to discourage that and to allow me to speak more freely about certain topics. I'm not trying to hide from nation states; I'm trying to hide from my peers in AI and tech, so I can freely discuss my opinions, which includes criticizing my own community (something I think everyone should do! Be critical of the communities we associate with). Moreover, I want people to consider my points on their merits alone, not on my identity or status.
If I was trying to hide from nation states I'd do things very very differently, such as not posting on HN.
I'm not afraid of my handle being deanonymized, but I still think we should recognize the dangers of the future we are creating.
By oversimplifying, you've constructed the position that this is a lost cause, as if we've already lost and, because we lost, cannot change. There are multiple fallacies here. The future has yet to be written.
If you really believe it is deterministic, then what is the point of anything? Of having desires or opinions? Are we just waiting to see which algorithm wins out? Or are we the algorithms playing themselves out? If it's deterministic, wouldn't you be happy if the freedom algorithm won and this moment is an inflection point in your programming? I guess that's impossible to say in an objective manner, but I'd hope that's how it plays out.
This is the main driver behind the targeted scams that ordinary people now have to deal with. It is why people get voice calls from loved ones in distress, why they get 'tech support' calls that aim to take over their devices and why lots of people have lost lots of money.
If you think I'm too much of a conspiracy theorist for making everything black and white, maybe that's simply because we live different lives and have different experiences.
If you believe in a God of a certain kind, you don't think being judged for your sins is unacceptable, or even good or bad in itself; you consider it inevitable. We have been talking it over for 2,000 years; people like the idea.
God is different, though. People like God because they believe God is fair and infallible. That is not true of machines or men. Similarly, I don't think people will like this idea. I'm sure some will, but look at people today and their religious fervor, or look at the past. They'll want it, but it is fleeting. Cults don't last forever, even when they're governments. Sounds like a great way to start wars, every one of them easily justified.
I've found the opposite, since these models still fail pretty wildly at nuance. I think it's a conceptual "needle in the haystack" sort of problem.
A good test is to find a thread where there's a disagreement and have the model analyze the discussion (a sketch of this test follows below). It will usually misrepresent what each side was saying and align strongly with one user, missing the actual divide that's causing the disagreement (the needle).
It requires a discussion with nuance to see the failure. Gemini is by far the worst at this (which fits my suspicion that they heavily weighted Reddit posts).
I don't think this is all that strange, though. The human on one side of the argument is also missing the nuance, which is the source of the conflict. Is there a belief that AI has surpassed the average human at conversational nuance!?
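For anyone who wants to reproduce the test, here's a sketch using the openai client; the model name and thread are placeholders, and the point is just to check whether the summary captures the actual crux:

    # Feed a disagreement thread to a model and ask for each side's
    # position plus the underlying crux. Thread and model are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    thread = """\
    user_a: Static typing catches whole classes of bugs before runtime.
    user_b: Most of my bugs are logic errors; types wouldn't catch them.
    user_a: Then your types aren't expressive enough.
    user_b: That's exactly the cost I'm objecting to.
    """

    prompt = (
        "Summarize each participant's position in this thread, then state "
        "the underlying disagreement in one sentence:\n\n" + thread
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you're testing
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

The failure mode to look for: a summary that sides with one user and frames the crux as something neither participant actually disputed.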
reading from the end isn't really useful, y'know :)