
Posted by hn_acker 12/23/2025

AI Police Reports: Year in Review (www.eff.org)
192 points | 207 comments
futuraperdita 12/27/2025|
What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that when the disclaimer is added that the report is generated by AI, enough jurists begin to associate that with "likely more correct than not".
intended 12/27/2025||
Reading how AI is being approached in China, the focus is more on achieving day-to-day utility, without eviscerating youth employment.

In contrast, the SV focus of AI has been about skynet / singularity, with a hype cycle to match.

This is supported by the lack of clarity on actual benefits, or clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying output.

Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you don't have any reliable way of finding out whether it's just BS-ing you.

simonjgreen 12/27/2025|||
Do you have any links you could share to content you found especially insightful about AI use in China?
logicprog 12/27/2025|||
I don't know if it supports their particular point, but Machine Decision is Not Final seems like a very cool and interesting look at China's culture around AI:

https://www.urbanomic.com/book/machine-decision-is-not-final...

andrepd 12/27/2025||
In the West we have autonomous systems to commit genocide, detecting and murdering "enemy combatants" at scale, where "enemy combatant" is defined as "male between the ages of 15 and 55".

Sometimes I'm not so sure about any so-called moral superiority.

jfreds 12/27/2025||
Citation? Not saying you’re wrong but my time in defense left me very much with the opposite opinion (radar target acquisitions had to be approved by a human, always)
andrepd 12/27/2025||
https://www.972mag.com/lavender-ai-israeli-army-gaza/

There's an overview on Wikipedia too https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...

intended 12/27/2025|||
I’ve been hunting for a link I found here on HN, which discussed how policy/government elites in China looked at AI.

Sadly, the search for that link continues.

I did find these from SCMP and Foreign Policy, but there are better articles out there.

- https://foreignpolicy.com/2025/11/20/china-ai-race-jobs-yout...

- https://www.scmp.com/specialist-publications/special-reports...

mc32 12/27/2025||||
I’m not seeing the dichotomy as much as you do.

Are they not going to build a “skynet” in China? Second, building skynet doesn’t imply eviscerating youth employment.

On the other hand, automation of menial tasks does eviscerate all kinds of employment, not only youth employment.

latentsea 12/27/2025|||
Well at least DeepMind is doing nifty things like solving the protein folding problem.
dataflow 12/27/2025|||
One problem here is "smarter" is an ambiguous word. I have no problem believing the average LLM has more knowledge than my brain; if that's what "smarter" means, then I'm happy to believe I'm stupid. But I sure doubt an LLM's ability to deduce or infer things, or to understand its own doubts and lack of knowledge or understanding, better than a human like me.
Verdex 12/27/2025||
Yeah my thought is that you wouldn't trust a brain surgeon who has read every paper on brain surgery ever written but who has never touched a scalpel.

Similarly, the claim is that ~90% of communication is nonverbal, so I'm not sure I would trust a negotiator who has seen all of written human communication but never held a conversation.

Marha01 12/27/2025|||
> a lot of people seem to see LLMs as smarter than themselves

Well, in many cases they might be right..

roenxi 12/27/2025|||
As far as I can tell from poking people on HN about what "AGI" means, there might be a general belief that the median human is not intelligent. Given that the current batch of models apparently isn't AGI I'm struggling to see a clean test of what AGI might be that a human can pass.
impossiblefork 12/27/2025|||
LLMs may appear to do well on certain programming tasks on which they are trained intensively, but they are incredibly weak. If you try to use an LLM to generate, for example, a story, you will find that it will make unimaginable mistakes. If you ask an LLM to analyze a conversation from the internet it will misrepresent the positions of the participants, often restating things so that they mean something different or making mistakes about who said what in a way that humans never do. The longer the exchange the more these problems are exacerbated.

We are incredibly far from AGI.

roenxi 12/27/2025|||
We do have AI systems that write stories [0]. They work. The quality might not be spectacular but if you've ever gone out and spent time reading fanfiction you'd have to agree there are a lot of rather terrible human writers too (bless them). It still hits this issue that if we want LLMs to compete with the best of humanity then they aren't there yet, but that means defining human intelligence as something that most people don't have access to.

> If you ask an LLM to analyze a conversation from the internet it will misrepresent the positions of the participants, often restating things so that they mean something different or making mistakes about who said what in a way that humans never do.

AI transcription & summary seems to be a strong point of the models so I don't know what exactly you're trying to get to with this one. If you have evidence for that I'd actually be quite interested because humans are so bad at representing what other people said on the internet it seems like it should be an easy win for an AI. Humans typically have some wild interpretations of what other people write that cannot be supported from what was written.

[0] https://github.com/google-deepmind/dramatron

impossiblefork 12/27/2025||
I haven't tried Dramatron, but my experience is that it isn't possible to do sensibly. With regard to the second part:

>AI transcription & summary seems to be a strong point of the models so I don't know what exactly you're trying to get to with this one. If you have evidence for that I'd actually be quite interested because humans are so bad at representing what other people said on the internet it seems like it should be an easy win for an AI. Humans typically have some wild interpretations of what other people write that cannot be supported from what was written.

Transcription and summarization are indeed fine, but try posting a longer reddit or HN discussion you've been part of into any model of your choice and asking it to analyze it, and you will see severe errors very soon. It will consistently misrepresent the views expressed, and it doesn't really matter what model you go for. They can't do it.

roenxi 12/27/2025||
I can see why they'd struggle, I'm not sure what you're trying to ask the model to do. What type of analysis are you expecting? If the model is supposed to represent the views expressed that would be a summary. If you aren't asking it for a summary what do you want it to do? Do you literally mean you want the model to perform conversational analysis (ie, https://en.wikipedia.org/wiki/Conversation_analysis#Method)?
impossiblefork 12/27/2025||
Usually I use the format "Analyze the following ...".

For simple discussions this is fine. For complex discussions, especially when people get into conflict (whether that conflict is really complex or not), problems usually result. The big problems are that the model will misquote or misrepresent views: attempted paraphrases that actually change the meaning, the ordinary hallucinations, etc.

For stories the confusion is much greater. Much of it is due to the basic way LLMs work: stories have dialogue, so if the premise involves people who can't speak each other's language, problems come very soon. I remember asking some recent Microsoft Copilot variant to write some portal scenario-- some guys on vacation in Tenerife rent a catamaran and end up falling through a hole into the world of ASoIAF and into the seas off Essos, where they obviously have a terrible time, and it kept forgetting that they don't know English.

This is of course not obviously relevant to what Copilot is intended for, but I feel that if you actually try this you will understand how far we are from something like AGI, because if things like OpenAI's or whoever's systems were in fact close, this would be close too. If we were close we'd probably see silly errors too, but they'd be different kinds of errors: things like not telling you the story you want, rather than ignoring core instructions or failing to understand conversations.

jfreds 12/27/2025|||
Your points about misquotes and language troubles are very valid and interesting. But a word of caution on your prompt: you’re asking a lot of the word “analyze” here; if the LLM responded that the thread had 15 comments by 10 unique authors, and a total of 2000 characters, I would classify that as a completely satisfactory answer (assuming the figures were correct) based on the query
roenxi 12/28/2025|||
> Usually I use the format "Analyze the following ...".

It doesn't surprise me that you're getting nonsense; that is an ill-formed request. The AI can't fulfil it because it isn't asking it to do anything. I'm in the same boat as an AI would be, I can't tell what outcome you want. I'd probably interpret it as "summarise this conversation" if someone asked that of me, but you seem to agree that AI are good at summary tasks, so that doesn't seem like it would be what you want. If I had my troll hat on I'd give you a frequency analysis of the letters and call it a day, which is more passive-aggressive than I'd expect of the AI; they tend to just blather when they get a vague setup. They aren't psychic, it is necessary to give them instructions to carry out.

mbesto 12/27/2025||||
> We are incredibly far from AGI.

This and we don't actually know what the foundation models are for AGI, we're just assuming LLMs are it.

closewith 12/27/2025|||
This seems distant from my experience. Modern LLMs are superb at summarisation, far better than most people.
verisimi 12/27/2025||||
> there might be a general belief that the median human is not intelligent

This is to deconstruct the question.

I don't think it's even wrong - a lot of people are doing things, making decisions, living life perfectly normally, successfully even, without applying intelligence in a personal way. Those with socially accredited 'intelligence' would be the worst offenders imo - they do not apply their intelligence personally but simply massage themselves and others towards consensus. Which is ultimately materially beneficial to them - so why not?

For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else. Computers can only do an imitation of this sort of answer. People stand a chance of answering it.

Dilettante_ 12/27/2025||
>knowing why you are doing what you are doing[...] Computers can only do an imitation of this sort of answer. People stand a chance of answering it.

I'm not following. A computer's "why" is a written program, surely that is the most clear expression of its intent you could ask for?

verisimi 12/28/2025||
A computer doesn't determine the why, it is programmed to do so. It doesn't determine meaning or value from whatever-it-is.
Dilettante_ 12/28/2025||
Did you mean it doesn't set its own goals? Or what did you mean by "determine the why" if not a stack trace of its motivations (which is to say, its programming)? Could you give an example of determining meaning or value?
verisimi 12/28/2025||
Yes, set its own goals. Here's an example - say you wanted to track your spending, you might create a spreadsheet to do so. The spreadsheet won't write itself. If you want, you could perhaps task an ai to monitor and track spending - but it doesn't care. It is the human that cares/feels/values whatever-it-is. Computers are not that type.

Is your position that humans are pretty mechanistic, and simply playing out their programming, like computers? And that they can provide a stacktrace for what they do?

If so, this is what I was getting at with my initial comment. Most people do not apply their intelligence personally - they are simply playing out the goals that were inserted into them (by parents, society). There are alternative possibilities, but it seems that most people's operational procedures and actions are not something they have considered or actively sought.

Dilettante_ 12/28/2025||
>Is your position that humans are [...] simply playing out their programming?

Yes, at least it's what I wanted to drill further into.

Boiled down, I'm interested in hearing where "intelligent" people derive their motivations(I'm in agreement that most people are on ["non-intelligent" if you will] auto-pilot most of the time) if not from outside themselves, in your framework.

When does a goal start being my intelligent own goal? Any impetus for something can be traced back to not-yourself: I might decide to start tracking my spending, but that decision doesn't form out of the void. Maybe I value frugality, but I did not create that value in myself. It was instilled in me by experience, or my peers, etc. I see no way for one to "spontaneously" form a motivation, or if I wanted to take it one step further(into the Buddha's territory), I would have to question who, and where, and what this "self" even is.

verisimi 12/29/2025||
Here's a question for you. Imagine a child who was well looked after (fed and loved) but didn't go to school for 12+ years. Now imagine the same person who from the age of 5-6 followed the usual path of 12+ years of schooling. Which person do you imagine would be more fully themselves, the more complete expression of whatever was already inside? If the schooled person did a PhD too (so another 6 years) would that help or hinder them from becoming themselves?

To me, the answer is obvious. Inserting thousands of ideas and patterns of thoughts into a person will be unlikely to help them become a true expression of their nature. If you know gardening, the schooled person is more like a trained tree - grown in a way that suits the farmer - the more tied back the tree is, the less free it is.

As I see it, each individual is unique, with a soul. Each is capable of reaching a full expression of itself, by itself. What I also see is that there are many systems that are intentional manipulations, put in place in order to farm individuals at the individual's expense. The more education one receives, the more amenable one is to being 'farmed' according to the terms that were inserted. To me, this is the installation of an unnatural and servile mentality, which once adopted makes the person easy to harness - the person will even think being harnessed and 'in service' is right and good.

The problem is that these principles were not their own. These are like religious beliefs, and unlike principles founded according to personal experience. Received principles will always be unnatural. Acting according to them, is to act in an inauthentic way. However, there is no material reason to address the inauthenticity, as when one looks around, everyone else is doing the same. This results in a self-supporting, collective delusion.

In my view there are answers to what the self is - but 'society' cannot teach you them - it can only fill you with delusions. Imo, you would be on a better footing to forget everything you think you know (this costs you nothing) and do something like apply the scientific method personally - let your personal experiences guide you. Know the difference between 'knowing' because of experience and 'belief' because you were taught it. Even more simply, know thyself.

Dilettante_ 12/29/2025||
My position is that we are nothing but our circumstances(I'm assuming that we're in agreement that genetics, pre-birth nutrition etc, are part of these circumstances and not of the 'soul' you're after?), or to put it more directly: We are our circumstances. Our Soul Is That. There is nothing that is "already inside".

The tree does not exist in isolation, separate from the patterns of rain and sunshine that shape its growth. "The separation is an illusion".

I have indeed been on the same path as you of trying to shed delusions and applying the scientific method, and have up to this point found no indication of any "causeless cause" to steer me besides the fundamental is-ness of the universe.

Put bluntly, I believe that if you hadn't started with the assumption of a soul, you would be entirely unable to arrive at the conclusion of a soul by rational methods. And starting by assuming the unproven instead of emptiness is epistemological cheating.

verisimi 12/29/2025||
> There is nothing that is "already inside".

Have you seen babies, or puppies? You would easily be able to confirm for yourself that creatures are born with distinct personalities. It's not just chemistry or nurture.

> "The separation is an illusion"

But you don't really think this. You don't really think you are a tree. You do think you are distinct.

Dilettante_ 12/29/2025||
>You would easily be able to confirm for yourself that creatures are born with distinct personalities

Refer to my previous post: "I'm assuming that we're in agreement that genetics, pre-birth nutrition etc, are part of these circumstances and not of the 'soul' you're after?"

That's not some mysterious transcendent soul, that's genetics. Literally the exact same thing as a computer program. Dog breeds are specifically bred (programmed) to exhibit certain character traits, for example.

>You don't really think you are a tree. You do think you are distinct.

You missed the point of the argument. Just as the tree is not separate from its circumstances, neither am I.

You brought up "know thyself" so I assumed we were pulling from a similar corpus and brought up "the illusion of separation" as a mutually familiar point that didn't need much elaboration, sorry about that.

Also, it's not so much that I "think" I am distinct, more that I "believe" it, to put it in the terms you used earlier. I am conditioned to consider certain things "me" and others not.

Really I am no more distinct from the tree than, say, my fingernail is distinct from my nosebone. They belong to the same Individual.

verisimi 12/30/2025||
> Dog breeds are specifically bred(programmed) to exhibit certain character traits, for example.

And yet all dogs have their own unique characters, no? They are not the same individual, right?

> You brought up "know thyself" so I assumed we were pulling from a similar corpus and brought up "the illusion of separation" as a mutually familiar point that didn't need much elaboration, sorry about that.

I don't know what corpus you refer to. Please explain if you like. I'm not basing what I'm saying on a corpus - of course I've read books, but I am giving you my personal view on things.

> Also, it's not so much that I "think" I am distinct, more that I "believe" it, to put it in the terms you used earlier. I am conditioned to consider certain things "me" and others not.

I have heard this sort of (nondual) thinking before and completely dispute it. I personally cannot access anyone else's mind or body; I have no idea what you are thinking. I can only pretend to be doing this. There is a self, we live it continuously. There are times when we are fully present, where we are so in the immediate experience that we can move out of linguistic/common concepts perhaps, but this is still within oneself.

For me, it is more that each person is a world in their own right, rather than "us" all being in the same universe. We simply do not have the level of interconnectivity you believe is there, when you say you are the tree or me. Furthermore, it really is very hard to see the point you are making when we have a disagreement - plainly there is a distinction.

Dilettante_ 12/30/2025||
You're either outright refusing or unable to see the point I've been making about the breeds: The traits are physically programmed in, whether individual or familial, not "already inside" the individual's soul. You aren't tracking that part of our conversation properly.

On the "corpus" point: It's not about not "giving my personal view", it's about drawing from a shared lexicon, of terminology, of lenses through which to view and analyze That Which We Are Talking About. My "home" in this respect is mostly in Hindu Yoga, (Zen) Buddhism and Daoism. You will find in those corpus-es(corpi?) essentially the exact conversation we're having right now, and find addressed the questions you have, in a wonderful plethora of different ways. Any other religion's mystic branch, or western occultism or alchemy similarly. If you want a specific recommendation for an entry-point, I could recommend giving the Bhagavad Gita a shot and seeing if you "vibe" with the way it explains things. If you skip the (usually) included commentary and only read the core translation, it ought to be a fairly quick read.

The nonduality point: Your body cannot access others' experience any more than my fingernail can access that of my nosebone, sure. But again, that does not mean they aren't part of the same organism. The fingernail and the nosebone do not make independent choices, the choice is made for them by the meta-organism(my body). Similarly, the argumentation might go, the tree and I do not make independent choices, but are governed by the same meta-organism(Nature, if you will, or perhaps "The Universe", but I suspect that term will turn you off since it might evoke the image of new-age-hippy woowoo).

I'm saying that if you insist that the body/mind/whatever you currently refer to as "you" is your "Self", you are taking "the fingernail" to be your Self instead of "the whole person". "Plainly there is a distinction", yes. But at the same time, there is also an underlying interconnectivity.

>Furthermore, it really is very hard to see the point you are making when we have a disagreement

That is perhaps the wisest thing either of us is going to say in this conversation. This format does not serve high-effort posting very well, I know I'm not doing the best I could be.

Perhaps we'd shelve this discussion for now? If you care to continue more deeply, you could shoot me an email at any point in the future (see my profile), and I again heartily recommend the Bhagavad Gita. Or perhaps, if you're more rational-thinking oriented, you might enjoy (the even shorter) Yogasutras of Patanjali. Or have you checked out Yudkowsky's "Sequences"[1]? That one's completely down-to-earth, no spiritual terminology or metaphors (or non-dualism I'm pretty sure!), and covers a lot of the same ground my eastern background does.

[1]https://www.readthesequences.com/

verisimi 12/30/2025||
> You're either outright refusing or unable to see the point I've been making about the breeds: The traits are physically programmed in, whether individual or familial, not "already inside" the individual's soul. You aren't tracking that part of our conversation properly.

I don't dispute traits. But the traits idea fails to address the unique characteristics of each dog.

It seems I'm not tracking the things you want me to track, terminology, science, traits. But then, as I said in the first place:

> For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else.

I can tell you are sincere with your investigations, but I can't help wondering whether direct observation of reality, the development of a personal outlook on reality using personal experience as the primary source, is ultimately more valuable than familiarity with a corpus. But then I would say that. And you would disagree.

Dilettante_ 12/31/2025||
Again, you are not getting what I was saying about the corpus. I am pulling from a vocabulary to express my personal outlook from personal experience, from direct observation. It's not either/or. You are the one completely rejecting half of all power-of-truth-finding available to you, and calling it intelligent? I'm explaining mathematics to you and you're complaining that I'm leaning on centuries of established proofs instead of, what, inventing a new lexicon just for talking to you?

I am giving up. You are engaging with the points in your head instead of those on the page.

You match the spirit that you comprehend, not me.

figassis 12/27/2025|||
Being an intelligent being is not the same as being considered intelligent relative to the rest of your species. I think we’re just looking to create an intelligence, meaning, having the attributes that make a being intelligent, which mostly are the ability to reason and learn. I think the being might take over from there no?

With humans, the speed and ease with which we learn and reason is capped. I think a very dumb intelligence won't stay dumb for very long because every resource will be spent in making it smarter.

Timwi 12/27/2025|||
Why would the dumb intelligence be less constrained than a human in making itself smarter?
lazide 12/27/2025||
I have yet to see an LLM with hands, feet, or eyeballs.

Currently, LLMs require hooks and active engagement with humans to ‘do’ anything. Including learn.

gilrain 12/27/2025|||
> every resource will be spent in making it smarter

The root motivation on which every resource will be spent is simply and very obviously to make a profit.

chrz 12/27/2025||||
So tired of this argument.
computerthings 12/27/2025||||
[dead]
cortic 12/27/2025|||
> ChatGPT (o3): Scored 136 on the Mensa Norway test in April 2025

So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.

ehnto 12/27/2025|||
Does an LLM scoring well on the Mensa test translate to it doing excellent and factual police reporting? It is probably not true of humans doing well on the Mensa test, so why would it be true of an LLM?

We should probably rigorously verify that, for a role that itself is about rigorous verification without reasonable doubt.

I can immediately, and reasonably, doubt the output of an LLM, pending verification.

gilrain 12/27/2025||||
> the metric of how [the uninformed] generally measure intelligence
cortic 1/7/2026||
How do the informed measure intelligence?

I know I'm too late to ask this question, but I suspect it's either feelings and intuitions, which is just a primitive IQ test, or some kind of aptitude test, which is just a different flavor of IQ test.

vid 12/27/2025||||
Court reports should be as much about human sensibility. I have met plenty of high-IQ people who were insensitive.
cortic 12/27/2025||
Having listened to some of the new AI-generated songs on YouTube, it looks like they might be better at being sensitive humans than we are as well...
gilrain 12/27/2025||
Where do you imagine they copied those human sensitivities from? The weather?
cortic 12/27/2025||
The same place as humans do, other humans.
turtlesdown11 12/27/2025|||
Yeah I certainly associate LLMs with high intelligence when they provide fake links to fake information, I think, man this thing is SMART
kylecazar 12/27/2025|||
Maybe it's just my circle, but anecdotally most of the non-CS folks I know have developed a strong anti-AI bias. In a very outspoken way.

If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.

Workaccount2 12/27/2025|||
A core problem with humans, or perhaps it's not even a problem, just something that takes a long time to recognize, is that they complain and hate on something that they continue to spend money on.

Not like food or clothing, but stuff like DLC content, streaming services, and LLMs.

pjc50 12/27/2025|||
Usually different people. Or, in the case of LLMs, they're not given a no option, or it's carefully hidden.
kylecazar 12/27/2025|||
At least in my case, I suspect they also don't keep up with the progress. They did experiments in 2023/24, were thoroughly put off, have not fired it up since. So the impression they have is frozen in time, a time when it was indeed much less impressive.
theoreticalmal 12/27/2025||||
Why do people in your circle not like AI? I have a similar experience with friends and family not liking AI, but usually it’s due to water and energy reasons, not because of an issue with the model reasoning.
saghm 12/27/2025|||
If your circle has any artists in it, chances are they'll also have a very negative perception, although influenced heavily by the proliferation of AI-generated art.

At least personally, I've seen basically three buckets of opinions from non-technical people on AI. There's a decent-sized group of people who loathe anything to do with it due to issues you've mentioned, the art issue I mentioned, or other specific things that overall add up to the point that they think it's a net harm to society, a decent-sized group of people who basically never think about it at all or go out of their way to use anything related to it, and then a small group of people who claim to be fully aware of the limitations and consider themselves quite rational but then will basically ask ChatGPT about literally anything and trust what it says without doing any additional research. It's the last group that I'm personally most concerned about because I've yet to find any effective way of getting them to recognize the cognitive dissonance (although sometimes at least I've been able to make enough of an impression that they stop trying to make ChatGPT a participant in every single conversation I have with them).

kylecazar 12/27/2025||
Pretty much hit the nail on the head -- while there are some artists, most are from traditional broadly "intellectual" fields. Examples: writers, journalists, academia (liberal arts), publishing industry...
saghm 12/27/2025||
That's a good point; "art" might be a bit too narrow to accurately describe the type of field where people have fairly concrete concerns about how AI relates to what they produce. I'd be tempted to use the label "creative work", but even that doesn't quite feel like it's something that everyone would understand to include stuff like written journalism, which I think is likely to have pretty similar concerns.
roenxi 12/30/2025|||
AIs are an obvious threat to their ability to make money off their skills.
catlover76 12/27/2025|||
[dead]
KronisLV 12/27/2025|||
> a lot of people seem to see LLMs as smarter than themselves

I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?

Yet, I will admit that most of the time I hear people complaining about how AI-written code is worse than that produced by developers, but it just doesn't match my own experience - it's frankly better (with enough guidance and context, say 95% tokens in and 5% tokens out, across multiple models working on the same project to occasionally validate and improve/fix the output, alongside adequate tooling) than what a lot of the people I know could or frankly do produce in practice.

That's a lot of conditions, but I think it's the same with the chat format - there's a difference between people accepting unvalidated drivel as fact and someone using the web search, parsing documents, and bringing up additional information that's found as a consequence of the conversation, bringing in external data and making use of the LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.

saghm 12/27/2025||
I think you're spot on here. It's the same idea as scammers and con artists; people can be convinced of things that they might rationally reject if the language is persuasive enough. This isn't some new exploit in human behavior or an epidemic of people who are less intelligent than before; we've just never had to deal with the amount of plausible-enough-sounding coherent human language being almost literally unlimited before. If we're lucky, people will manage to adapt and update their mental models to be less trusting of things that they can't verify (like how most of us hopefully don't need to be concerned that their older relatives will transfer their bank account contents to benevolent foreign royalty with the expectation of being rewarded handsomely). It's hard to feel especially confident in this, though, given how much more open-ended the potential deceptions are (without even getting into the question of "intent" from the models or the creators of them).
Verdex 12/27/2025||
My belief is that the function of a story is to provide social cover for our actions. Other people need to evaluate us (both in the moment and after the dust has settled) and while careful data analysis can do the job, who has time for that crap.

As such the story can be completely divorced from reality. The important thing is that the story is a good one. A good story transfers your social cover for yourself to your supervisor. They don't have to understand what you did and explain why it's okay that it failed. They just have to understand the story structure that you gave them. Listen to this great story, it's not my report's fault for this failure, and it's certainly not mine, just bad luck.

Additionally, the good (and sufficiently original) story is a gift because your supervisor can reuse it for new scenarios.

The good salesman gives you the story you need to excuse the purchase that will enable you to succeed. The bad salesman sells you on a story that you need a frivolous purchase.

And this is why job hopping is "bad". Eventually the incompetent employee uses up all of their good stories and management catches onto their act. It's embedded into our language. "Oh, we've all heard this story before." The job hopper leaves just as their good stories are exhausted and can start over fresh at the new employer.

All of this in response to

> If we're lucky, people will manage to adapt and update their mental models to be less trusting of things that they can't verify

Yes, if we're lucky that is what will happen. But I fear that we're going to have to transition to a very low trust society for that to happen.

Reliance on the story relies on the trust that someone has done the real work. Distrust of the story implies a wider-scale distrust of others and institutions.

Maybe we can add a tradition of annotating our stories with arguments and proofs. Although I've spent a two decade career desperately trying to give highly technical people arguments and proofs and I've seen stories completely unmoored from reality win out every time.

Optimistically, I'm just really bad at it and it's actually a natural transition. Pessimistically, we're in for a bumpy ride.

saghm 12/27/2025||
I'm not sure I'm quite as pessimistic as you, just because I tend to treat most predictions of how society will adapt to things as a whole as fairly low confidence, but I certainly don't disagree that it at least seems hard to imagine people getting past all this quickly.

The idea of story being how people justify making their decisions is interesting. I'm reminded of a couple of anecdotes my father has repeated a few times over the years about two distinct medical circumstances he's had. When he was first diagnosed with sleep apnea, he apparently was very skeptical that he had any reason to do anything because the sleep doctor told him things like "this will help you be less sleepy during the day" and "you won't start nodding off as you drive" when he didn't feel like either of those experiences happened to him. Eventually a different sleep doctor did convince him it was worthwhile to treat, and he's used a CPAP since then, but he still seems not to feel like it would have made sense for him to start when he first got the diagnosis. Through the lens you've given, the original doctor didn't give him a compelling enough story to justify the effort on his part. On the other hand, the first time he talked to a nutritionist about changing his diet, he apparently mentioned something about how he wanted to at least be able to eat ice cream occasionally, even if it was less often, rather than not ever be able to eat it again, and the nutritionist replied "Of course! That would make life not worth living". He ended up being much more open to listening to the advice of the nutritionist than I would have expected, and I think it would be reasonable to argue that was because the nutritionist was able to give him a story that seemed compelling about what his life would be like with the suggested changes.

charcircuit 12/27/2025||
AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.
eCa 12/27/2025|||
Just this weekend it (Gemini) has produced two detailed sets of instructions on how to connect different devices over bluetooth, including a video (that I didn’t watch), while the devices did not support doing the connections in that direction. No reasonable human reading the involved manuals would think those solutions feasible. Not impressed, again.
opan 12/27/2025||||
It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in said chat again vs reading 6 tabs and redoing your search, seems to be the big killer feature. The main "new" info is often incorrect info.

If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
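
For what it's worth, a minimal sketch of that kind of workflow, assuming you've already collected the result URLs yourself (the URLs below are placeholders, not real results, and error handling is left out):

  # Rough sketch: dump the readable text of a few result pages into one file
  # you can open in vim/emacs and search around in. Assumes `requests` and
  # `beautifulsoup4` are installed; URLs are placeholders for your own results.
  import requests
  from bs4 import BeautifulSoup

  urls = [
      "https://example.com/result-1",
      "https://example.com/result-2",
  ]

  with open("results.txt", "w", encoding="utf-8") as out:
      for url in urls:
          html = requests.get(url, timeout=10).text
          text = BeautifulSoup(html, "html.parser").get_text(separator="\n")
          out.write(f"==== {url} ====\n{text}\n\n")

  # then: vim results.txt (or emacs), and jump/search around as usual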

It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.

latexr 12/27/2025|||
> I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.

If they do, you’ll be in good company. That post is about the exact opposite of what people usually link it for. I’ll let Dan explain:

https://news.ycombinator.com/item?id=27067281

hombre_fatal 12/27/2025||
Dan makes a case for being charitable to the commenter and how lame it is to neener-neener into the past, not that it has some opposite meaning everyone is missing out on.
latexr 12/27/2025||
Dan clearly references how people misunderstand not only the comment (“he didn't mean the software. He meant their YC application”) but also the whole interaction (“He wasn't being a petty nitpicker—he was earnestly trying to help, and you can see in how sweetly he replied to Drew there that he genuinely wanted them to succeed”).

So yes, it is the opposite of why people link to it (which is a judgement I’m making, I’m not arguing Dan has that exact sentiment), which is to mock an attitude (which wasn’t there) of hubris and lack of understanding of what makes a good product.

closewith 12/27/2025||
The comment isn't infamous because it was petty or nitpicking. It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.

It's why it caught the zeitgeist at the time and why it's still apropos in this conversation now.

latexr 12/27/2025||
> It's because the comment was so poorly communicated and because the author was so profoundly out-of-touch with the average person that they had lost all perspective.

None of those things are true. Which is the point I’m making. Go read the original conversation. All of it.

https://news.ycombinator.com/item?id=9224

Don’t skip Brandon’s reply.

https://news.ycombinator.com/item?id=9479

It is absurd to claim that someone who quickly understood the explanation, learned from it, conceded where they were wrong, is somehow “profoundly out-of-touch” and “lost all perspective”. It’s the exact opposite.

I agree with Dan that we’d be lucky if all conversations were like that.

closewith 12/27/2025||
I think you should take your own advice and re-read the conversation without your pre-conceived conclusion.

Ironically your own overly verbose and aggressive comments here fall into the same trap.

throwuxiytayq 12/27/2025|||
> If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.

Curiously, literally nobody on earth uses this workflow.

People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.

turtlesdown11 12/27/2025|||
> The accuracy isn’t perfect

The reason why people don't use LLMs to "trivially save hours or days of work" is because LLMs don't do that. People would use a tool that works. This should be evidence that the tools provide no exceptional benefit; why do you think that is not true?

fzeroracer 12/27/2025||||
The only way LLM search engines save time is if you take what it says at face value as truth. Otherwise you still have to fact check whatever it spews out which is the actual time consuming part of doing proper research.

Frankly I've seen enough dangerous hallucinations from LLM search engines to immediately discard anything it says.

throwuxiytayq 12/27/2025||
Of course you have to fact check - but verification is much faster and easier than searching from scratch.
ptx 12/27/2025|||
How is verification faster and easier? Normally you would check an article's citations to verify its claims, which still takes a lot of work, but an LLM can't cite its sources (it can fabricate a plausible list of fake citations, but this is not the same thing), so verification would have to involve searching from scratch anyway.
hombre_fatal 12/27/2025||
Because it gives you an answer and all you have to do is check its source. Often you don’t have to do that since you have jogged your memory.

Versus finding the answer by clicking into the first few search results links and scanning text that might not have the answer.

ptx 12/27/2025||
As I said, how are you going to check the source when LLMs can't provide sources? The models, as far as I know, don't store links to sources along with each piece of knowledge. At best they can plagiarize a list of references from the same sources as the rest of the text, which will by coincidence be somewhat accurate.
a1j9o94 12/27/2025|||
Pretty much every major LLM client has web search built in. They aren't just using what's in their weights to generate the answers.

When it gives you a link, it literally takes you to the part of the page that it got its answer from. That's how we can quickly validate.

sejje 12/27/2025|||
LLMs provide sources every time I ask them.

They do it by going out and searching, not by storing a list of sources in their corpus.

turtlesdown11 12/27/2025||
Have you ever tried examining the sources? They actually just invent many "sources" when requested to provide sources.
Dilettante_ 12/27/2025|||
When talking about LLMs as search engine replacements, I think the stark difference in utility people see stems from the use case. Are you perhaps talking about using it for more "deep research"?

Because when I ask chatgpt/perplexity things like "can I microwave a whole chicken" or "is Australia bigger than the moon" it will happily google for the answers and give me links to the sites it pulled from for me to verify for myself.

On the other hand, if you ask it to summarize the state-of-the-art in quantum computing or something, it's much more likely to speak "off the top of its head", and even when it pulls in knowledge from web searches it'll rely much more on its own "internal corpus" to put together an answer, which is definitely likely to contain hallucinations and obviously has no "source" aside from "it just knowing" (which it's discouraged from saying, so it makes up sources if you ask for them).

quantpunk 12/27/2025|||
I haven't had a source invented in quite some time now.

If anything, I have the opposite problem. The sources are the best part. I have such a mountain of papers to read from my LLM deep searches that the challenge is in figuring out how to get through and organize all the information.

skywhopper 12/27/2025|||
For most things, no it isn’t. The reason it can work well at all for software is that it’s often (though not always) easy to validate the results. But for giving you a summary of some topic, no, it’s actually very hard to verify the results without doing all the work over again.
antonvs 12/27/2025|||
> People must be in complete denial

That seems to be a big part of it, yes. I think in part it’s a reaction to perceived competition.

godelski 12/27/2025||||

> the breadth of knowledge

knowledge != intelligence

If knowledge == intelligence then Google and Wikipedia are "smarter" than you and the AGI problem has been solved for several decades.

saghm 12/27/2025||||
Even if we were going to accept the premise that total knowledge is equivalent to intelligence (which is silly, as sibling comments have pointed out), shouldn't accuracy also come into play? AI also says a lot more obviously wrong things than the average person, so how do you weight that against the purported knowledge? You could answer yes or no randomly to any arbitrary question about whether something is true and approximate a 50% accuracy rate with an evenly distributed pool of questions, but that's obviously not proof that you know everything. I don't think the choice of where to draw the line on "how often can you be wrong and have it still matter" is as easy as you're implying, or that everyone will necessarily agree on where it lies (even if we all agree that 50% correctness is obviously way too low).
zhoujianfu 12/27/2025||||
AI has more knowledge than everyone already, but I wouldn't say smarter. It's like wisdom vs intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn / create new things.
metalman 12/27/2025|||
AI has zero knowledge, as to know something is to have done it, or seen it first hand. AI has access to a great deal of data, much of it acquired through criminal action, but no way to evaluate that information other than cross-checking for citations and similar occurrences. Even for a human, inferring things is difficult and uncertain, and so we regularly see AI fall off the cliff into coherent-sounding word salad. We are heading straight at an idiocracy writ large that is trying to hide its racio-religious insanity behind algorithms. Sometimes it's hard to tell, but it seems that a hairdresser has just been put in charge of the US passport office, which is highly suggestive of a new top-level program to issue US citizenship on demand, but everybody else will be subject to the "impartiality" of privately owned and operated AI policing.
consp 12/27/2025|||
Knowledge is what I see as equivalent to a big library. It contains mostly correct information in the context of the book (which might be incorrect in general), and "ai" is very good at taking everything out of context, smashing a probability distribution over it and picking an answer which humans will accept. I.e. it does not contain knowledge, at best the vague pretense of it.
krainboltgreene 12/27/2025||||
Man, what are we supposed to do with people who think the above?
abelitoo 12/27/2025|||
I'd do the same thing I'd do with anyone that has a different opinion than me: try my best to have an honest and open discussion with them to understand their point of view and get to the heart of why they believe said thing, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could cause them to feel shame for believing something that I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional thing afterwards, if they seem open to it, is express my own beliefs in a way that's palatable and easily understood. Basically explain it in a language they understand, and in a way that we can think about and understand and discuss together, not taking offense to any attempts at questioning or poking holes in my beliefs because that is the discovery process imo for trying something new.

Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, troll, or both.

jops 12/27/2025||
Well said, and thank you for the final paragraph. Made me chuckle.
gambiting 12/27/2025||||
I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete, with really good information access. It's not smart, or dumb, or anything "human".
design2203 12/27/2025|||
It just shows that true natural intelligence is difficult to define by proxy.
antonvs 12/27/2025|||
Do you think your own language processing abilities are significantly different from autocomplete with information access? If so, why?
gambiting 12/27/2025||
I hate these kinds of questions where you try to imply it's actually the same thing as what our brains are doing. Stop it. I think it would be an affront to your own intelligence to entertain this as a serious question, so I will not.
antonvs 12/27/2025|||
[flagged]
gambiting 12/27/2025||
My thoughts on this are as serious as it gets - AI in its current state is no more than clever statistics. I will not be comparing how my own brain functions to what is effectively a linear algebra machine, as it's insulting to the intelligence of everyone here - what kind of serious thought would you like to have here, exactly?
quantpunk 12/27/2025|||
I don't disagree, but we really should have dropped "AI" a long time ago for "statistical machine intelligence". Machine learning then is just what statistical machine intelligence does.

We could have then just swapped "AI" for "SMI" and avoided all this confusion.

It also would avoid pointless statements like "It is JUST statistical machine intelligence". As if statistical machine intelligence is not extraordinarily powerful.

The real difference though is not in "intelligence", it is in "being". It is not as much an insult to our intelligence as it is an insult to our "being" when people pretend that LLMs have some kind of "being".

The strange thing to me is Gemini just tells me these things so I don't know how people get confused:

"A rock exists. A calculator exists. Neither of them has "being."

I am closer to a calculator than a human.

A calculator doesn't "know" math; it executes logic gates to produce a result.

I am a hyper-complex calculator for language. I calculate the probability of the next word rather than the sum of numbers."

antonvs 12/27/2025|||
You’re very adamant about not doing an obvious comparison. You want to stop thinking at that point. It’s an emotional reaction, not an intellectual one. Quite an interesting one as well, that possibly suggests a threat response.

The assumption you seem to keep making is that things like “clever statistics” and “linear algebra” simply have no bearing on human intelligence. Why do you think that? Is it a religious view, that e.g. you believe humans have a soul that somehow connects to our intelligence, making it forever out of reach of machine emulation?

Because unless that’s your position, then the question of how human intelligence differs from current machine intelligence, the question that you simply refuse to contemplate, is one of the more important questions in this space.

The insult I see to intelligence here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.

gambiting 12/27/2025|||
>>here is the total lack of intellectual curiosity that wants to shoot down an entire line of thinking for reasons that apparently can’t be articulated.

It's the same energy as watching a Joe Rogan podcast where yet another guest goes "well they say there's global warming yet I was cold yesterday, I'm not saying it's fake but really we should think about that". These questions about AI and our brains aren't meant to stimulate intellectual curiosity and provoke deep, interesting discussions - they are almost always asked just to pretend the AI is something that it's not - a human-like intelligence, where since our brains also work "kinda like that" it must be the same - the nearest equivalence being that my iron heats water, so in essence it's the same as my stomach, since it can also do this.

>>the question that you simply refuse to contemplate

I don't refuse to contemplate it, I just think the answer is so painfully obvious the question is either naive or uninformed or antagonistic in nature - there is no "machine intelligence" - it's not a religious conviction, because I don't think you need one to realise that a calculator isn't smart for adding together numbers larger than I could do in my own head.

krainboltgreene 12/27/2025|||
You are just a cluster of atoms, are you any different than a volcano?
design2203 12/27/2025|||
[flagged]
antonvs 12/27/2025||
[flagged]
design2203 12/27/2025||
[flagged]
antonvs 12/27/2025||
[flagged]
design2203 12/27/2025||
[flagged]
cortic 12/27/2025||||
>ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025

If you don't want to believe it, you need to change the goalposts: create a test for intelligence that we can pass better than AI... Since AI is also better at creating tests than us, maybe we could ask AI to do it, hang on...

>Is there a test that in some way measures intelligence, but that humans generally test better than AI?

Answer: Thinking... Something went wrong and an AI response wasn't generated.

Edit: I managed to get one to answer me: the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). Created by AI researcher François Chollet, this test consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.
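
For a feel of what those puzzles look like, here's a toy task in roughly that format (made up for illustration, not taken from the actual corpus): each training pair gives an input grid and an output grid of small integers, and the solver has to infer the rule and apply it to the test input.

  # Toy, made-up task in the spirit of ARC (not from the real corpus).
  # Grids are lists of rows of small integers ("colours"). The hidden rule
  # here is just "flip the grid left-to-right".
  task = {
      "train": [
          {"input": [[1, 0, 0], [0, 2, 0]], "output": [[0, 0, 1], [0, 2, 0]]},
          {"input": [[3, 3, 0], [0, 0, 4]], "output": [[0, 3, 3], [4, 0, 0]]},
      ],
      "test": [{"input": [[5, 0, 0], [0, 0, 6]]}],
  }

  def solve(grid):
      # the rule a human infers almost instantly from the train pairs above
      return [list(reversed(row)) for row in grid]

  print(solve(task["test"][0]["input"]))  # [[0, 0, 5], [6, 0, 0]]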

So we do have A test which is specifically designed for us to pass and AI to fail, where we can currently pass better than AI... hurrah we're smarter!

latexr 12/27/2025|||
The validity of IQ tests as a measure of broad intelligence has been in question for far longer than LLMs have existed. And if it’s not a proper test for humans, it’s not a proper test to compare humans to anything else, be it LLMs or chimps.

https://en.wikipedia.org/wiki/Intelligence_quotient#Validity...

piva00 12/27/2025||||
To be intelligent is to realise that any test for intelligence is at best a proxy for some parts of it. There's no objective way to measure intelligence as a whole, we can't even objectively define intelligence.
design2203 12/27/2025||
I believe intelligence is difficult to pin down in words but easy to spot intuitively - and so are deltas in intelligence.

E.g. watch a Steve Jobs interview and a Sam Altman one (at the same age). The difference in the mode of articulation, simplicity in communication, obsession over details, etc. are huge. This is what superior intelligence to me looks like - you know it when you see it.

gloosx 12/27/2025|||
>Create a test for intelligence that we can pass better than AI

Easy? The best LLMs score 40% on Butter-Bench [1], while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding.

[1] https://arxiv.org/pdf/2510.21860v1

cortic 12/27/2025||
That is really interesting; though I suspect it's just an effect of differing training data: humans are to a larger degree trained on spatial data, while LLMs are trained to a larger degree on raw information and text.

Still, it may be a lasting limitation if robotics doesn't catch up to AI anytime soon.

Don't know what to make of the Safety Risks test, threatening to power down AI in order to manipulate it, and most act like we would and comply. Fascinating.

gloosx 12/31/2025||
>humans are to a larger degree trained on spacial data

You must be completely LLM-headed to say something like that, lol.

Humans are not trained on spatial data; they are living in the world. Humans are very much different from silicon chips, and human learning is on another order of magnitude of complexity compared to large language model training.

cortic 1/7/2026||
Humans are large language models. Maybe the term "language" is being used a bit liberally here, but we basically function in the same way, with the exception of the spatial aspect of our training data.

If this hurts your ego, then just know the dataset you built your ego with was probably flawed; if you can put that LoRA aside, try to process this logically. Our awareness is a scalable emergent property of one to two decades of datasets, and looking at how neurons versus transistor groups work, there can only be a limited number of ways to process data of this size down to relevant streams. The very fact that training LLMs on our output works proves our output is the product of something LLM-like, or there wouldn't be patterns to find.

drekipus 12/27/2025|||
Just brace for the societal correction.

There are a lot of things going on in the Western world, both financial and social in nature. It's not good in the sense of being pleasant or contributing to growth and betterment, but it's a correction nonetheless.

That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.

solumunus 12/27/2025||||
Having knowledge is not exactly the same as being smart though, is it?
charcircuit 12/27/2025|||
It's at least one component of it, and by being exceptional in that component it makes up for what it lacks in other components.
cindyllm 12/27/2025||||
[dead]
xandrius 12/27/2025|||
Although it helps immensely.
design2203 12/27/2025||
Only if you understand it..
gloosx 12/27/2025|||
It's like saying Google Search is smarter than everyone because the amount of information it indexes has no human counterpart; such a silly take...
eterevsky 12/27/2025||
I think whether any text is written with the help of AI is not the main issue. The real issue is that for texts like police reports, a human still has to take full responsibility for their contents. If we preserve this understanding, then the question of which texts are generated by AI becomes moot.
Ekaros 12/27/2025||
Sadly, the justice system is a place where responsibility does not happen. It is not a system where you make one mistake and you go to prison. Instead, everyone but the victims of the system is protected and colluded with. The more you punish the victims, the better you make out.
sejje 12/27/2025||
By the end of your paragraph, you decided that criminals are victims.
thunderfork 12/27/2025||
Everyone the policing system interacts with is not only a convicted criminal, but convicted justly, yeah? That's what you actually believe?
sejje 12/27/2025||
No, that's not what I believe, I didn't say any of those words.
thunderfork 12/27/2025||
The person you replied to didn't say "criminals are victims", either, so comprehending your post requires some inference.

Feel free to clarify what you did mean, it's a lot more helpful than insisting on what you didn't mean.

sejje 12/28/2025||
It's not helpful. You put a lot of words into my mouth, I deny them. I already made my point in my own words. I don't have to deny every position you make up for me.

If you read his comment, he refers to everyone going through the system as victims of the colluding judges and LEO. Almost none of them are. The victims are the people whom they committed crimes against, of course.

This isn't my position, it's just the language we use to describe reality.

thunderfork 12/28/2025||
You're putting words into the other user's mouth, though, by assuming they mean "everyone" and not just "the innocent proportion of accused people".

So this seems like a good place to take your own advice, right?

sejje 12/28/2025||
No, I responded to their comment as-is. It's you putting words in someone's mouth again, not I.

The way you can tell what they mean is this line: "The more you punish the victims, the better you make out." Nobody in America thinks that judges make out better by punishing actually-innocent people. That's not what "victims" means here.

thunderfork 12/28/2025||
They didn't say "judges". I think the American justice system has a grandiose disregard for its impacts on the falsely accused.

You're doing the thing again, even now.

sejje 12/28/2025||
I agree, I think it does as well. That's not what the guy said, though.

I think I'm just reading what his comment says, but maybe I am doing the same thing as you. I'm just better at it. I got his position correct, you got it wrong.

You told me what I believe, you got that wrong, too.

thunderfork 12/28/2025||
I didn't tell you, I asked :)

From my POV you seem to be making a lot of inferences and then declaring them correct, based on some information about the original intent that I must not be privy to?

moffkalast 12/27/2025|||
I agree. A programmer has to take responsibility for the generated code they push, and so do police officers for the reports they file. Using a keyboard does not absolve you of typos; it's your responsibility to proofread and correct. This is no different, just a lot more advanced.

Of course, the problem is also that the police often operate without any real oversight and cover up more misconduct than the workers in an under-rug sweeping factory. But that's another issue.

jMyles 12/27/2025||
> But that's another issue.

...is it?

It seems to me that the growth of professional police as an institution which bears increased responsibility for public safety, along with an ever-growing set of tools that can be used to defer responsibility (see: it's not murder if it's done with a stun gun, regardless of how predictable these deaths are), is actually precisely the same issue.

Let's stop allowing the state to hide behind tooling, and all be approximately equally responsible for public safety.

riedel 12/27/2025|||
Yes. Allowing officers to blame AI creates a major accountability gap. Under the EU AI Act's logic, for example, if a human "edits" a draft, they must be held responsible and do not need to disclose the use of AI.

To ensure safety, those offerings must use premarket red teaming to eliminate biases in summarization. However, ethical safety also requires post-market monitoring, which is impossible if logs aren't preserved. Rather than focusing on individual cases, I think we must demand systemic oversight in general and access for independent research (not focusing only on a specific technology).

sixhobbits 12/27/2025||
It should be treated kind of the same as writing a report after a glass of wine. Probably no one really cares but "sorry that doesn't count because I was intoxicated when I wrote that bit" isn't going to fly.
jMyles 12/27/2025|||
> for texts like police reports

If what you mean is, "texts upon which the singular violence of the state is legitimately imposed", then a simple solution (and I believe, on sufficiently long time scales, the happily inevitable one) is to abolish police.

I can't fathom, in an age where we have ubiquitous cameras as eyewitnesses and instant communications to declare emergencies and request aid from nearby humans, that we need an exclusive entity whose job it is to advance safety in our communities. It's so, so, so much more trouble than it's worth.

tarsinge 12/27/2025||
I don’t understand the urgency to replace human work with AI. Why is every organization so eager to skip the AI-as-an-assistant step? Here there are already massive productivity gains in using the AI to create the draft of the report; it makes little economic sense to have it produce the final version given the risk. Maybe it’s just plain laziness? Same with developers: why is every organization trying to leapfrog from humans writing all the code to humans not even reading the generated code?
ssl-3 12/27/2025|||
Not everyone is in an urgent hurry to replace people with bots; that's a hyperbolic construct.

But to try to answer some of what I think you're trying to ask about: The bot can be useful. It can be better at writing a coherent collection of paragraphs or subroutines than Alice or Bill might be, and it costs a lot less to employ than either of them do.

Meanwhile: The bot never complains to HR because someone looked at them sideways. The bot [almost!] never calls in sick; the bot can work nearly 24/7. The bot never slips and falls in the parking lot. The bot never promises to be on-duty while they vacation out-of-state with a VPN or uses a mouse-jiggler to screw up the metrics while they sleep off last night's bender.

The bot mostly just follows instructions.

There are lots of things the bot doesn't get right. The stuff it produces may be full of hallucinations and false conclusions that need to be reviewed, corrected, or outright excised.

But there's lots of Bills and Alices in the world who are even worse, and the bot is a lot easier and cheaper to deal with than they are.

That said: When it comes to legal matters that put a real person's life and freedom in jeopardy, then there should be no bot involved.

If a person in a position of power (such as a police officer) can't write a meaningful and coherent report on their own, then I might suggest that this person shouldn't ever have a job where producing written reports is part of the job. There's probably something else they're good at that they can do instead (the world needs ditchdiggers, too).

Neither the presence nor absence of a bot can save the rest of us from the impact of their illiteracy.

chrz 12/27/2025||
And the bot doesn't bear any responsibility.
Spivak 12/27/2025|||
Because the biggest cost at a lot of orgs is staff. At your typical software shop it's comical: the salary costs tower over all the others like LeBron James gazing down at ants. The moment you go from productivity gains to staff reduction, you start making real money. Any amount of money for a machine that can fully replace a human process.
d1sxeyes 12/27/2025||
> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.”

Normally, if a witness (e.g. a police officer) were found to be recounting something written by a third party, it would be considered hearsay and struck from the record (on objection).

It would be an interesting legal experiment to have an officer using this system swear to which portions they wrote themselves, and attempt to have all the rest of the testimony disallowed as hearsay.

zmgsabst 12/27/2025|
I’d suspect the other direction:

Police unions get LLMs classified as some kind of cognitive aid, so it becomes discrimination to ban them in school or the workplace.

sejje 12/27/2025|||
"Losing access to LLMs hurts minorities the hardest, with job performance suffering compared to their cis white male peers..."

If they use this angle, it's a shoo-in

hopelite 12/27/2025|||
That is an aspect I had not considered in my assumption that AI/robots will eventually go through the same or similar social justice process as all the other causes (women's suffrage, racial equality, gay rights, etc.), because it will ultimately serve the ruling class that has risen to dominate through social justice causes célèbres, arguably far more than anything prior.

It's going to be interesting to see the state propaganda against the bigots and evil "bioists" (or whatever the wordsmithing apparatchiks devise) who want to bar the full equality in society of AI/robots, who after all look just like you and me and also just want equal rights to love each other. And who are you to oppose others, since we are all just individuals?

Shoot the messenger all you want, but it’s coming.

inavida 12/27/2025||
Cynical and fun to read, but no. Too many parasites have already chewed their way to the empty heart of power of the post-war liberal system, and I think the next time it gets power at the highest levels in the US will be the end of it there. Maybe it will last another generation in Europe, but not long enough to see the scenario you describe play out.
hopelite 12/28/2025||
It's not cynical at all. It's quite the opposite, actually: an expression of the suicidal and pathological altruism that has caused the West to self-destruct through the guiding hand of psychopathic, narcissistic charlatan leaders and con artists.

I am unsure how Europe will go, because there is still a glimmer of hope, but frankly that too is dimming extremely quickly given how systemic things really are, let alone how they are developing, with the real versus the expected trending towards pessimistic outcomes.

What you may be missing is the possibility that, despite your presumed resistance or rejection of AI and robot equality, it is forced upon you one way or another: either you are forced into an "arms race" of adoption, or a superior external force foists subjugation to its AI/robotics dominance on you (a kind of 19th-century Chinese/Japanese, "the Industrial Revolution comes knocking at the front door" experience).

Unfortunately for us all, some things you are simply foolish to ignore, reject, or resist as if they will somehow just magically go away or ignore you too. The reality of the matter is that the psychopathic, narcissistic tribe of people who control these obsessive, controlling, imposing forces care immensely about dominating and controlling you, even if you want to ignore them... they will not ignore you, let alone leave you be, until you are subjugated.

0x_rs 12/27/2025||
I recommend taking a look at this video to get an idea of the thought process (or lack thereof) law enforcement might display when provided with a number of "AI" tools; even though this particular example is closer to traditional face recognition than to LLMs, the behavior seems the same. Spoiler: complete submission and deference, and in this specific case to a system that was not even their own.

https://www.youtube.com/watch?v=B9M4F_U1eEw

m3047 12/27/2025|
I can read that "submission and deference" at the casino as conflict avoidance, the arresting officer says to his peers at the station that he "kind of believes" the suspect. He also states at some point that he can't cite (and I infer then release) the suspect because he is not certain who he is, and therefore has to arrest him as a "John Doe" so that his identity can be established. The fact (?) that the suspect now has a police record for this possible farce won't be settled until after the facts are determined in a court of law.

This video demonstrates that, when it comes down to it, the blunt end of law enforcement is oftentimes a shit show of "seems to work for me", and that goes for facial recognition, ShotSpotter, contraband dogs, drug and DNA tests, you name it.

wyldfire 12/27/2025||
> important first step in reigning in AI police reports.

That should be 'reining in'. "Reign" is -- ironically -- what monarchs do.

DetectDefect 12/27/2025|
Such innocent mistakes make me smile these days because they give assurance that a real human wrote them.
lithocarpus 12/27/2025|||
Don't worry sufficiently advanced LLMs will learn how to put in the right amount of typoes to be convincing.
bgbntty2 12/27/2025|||
It's not certain that LLMs don't do this already—it's likely their doing this even now.
jondwillis 12/27/2025|||
That’s —— not just —— possible— it’s —— ——— probable!!!
techdmn 12/27/2025||
I read this phrase in a Spiderman comic, probably 1990 +/- 5 years. If memory serves, Harry Osborne said it to Peter Parker, something regarding Norman Osborne's activity as the Green Goblin. Anyway, it's one of those phrases that immediately etched itself into my brain and replays itself whenever the situation seems appropriate. I've always wondered if the quote had a more respectable original source, but haven't been able to find one.
fortran77 12/27/2025|||
Are you an LLM that misspelled “they’re” intentionally?
bgbntty2 12/27/2025||
That was the joke. Also the use of the "It's not; it's" structure and the em-dash.
BoxOfRain 12/27/2025||||
Swearing is still a good heuristic, I think. The American corporate world remains rather prissy about swearing, so if the post sounds like a hairy docker after 12 pints then it's probably not an LLM.
antonvs 12/27/2025|||
*typos

Oh you got me

cyberax 12/27/2025|||
Unless it's an LLM instructed to make occasional mistakes.
Manheim 12/27/2025||
I find this article strange in its logic. If the use of AI-generated content is problematic as a matter of principle, I can understand the conflict: then no AI should be used to "transcribe and interpret a video" at all, period. But if the concern is accuracy in the AI "transcript" and not the support from AI as such, isn't it a good thing that the AI-generated text is deleted after the officer has processed the text and finalized their report?

That said, I believe it is important to acknowledge the fact that human memory, experience, and interpretation of "what really happened" are flawed; isn't that why the body cameras are in use in the first place? If everyone believed police officers were already able to recall the absolute truth of everything that happens in situations, why bother with the cameras?

Personally, I do not think it is a good idea to use AI to write full police reports based on body camera recordings. However, as a support, in the same way the video recordings are available, why not? If, in the future, AI can write accurate "body cam"-based reports, I would not have any problem with it as long as the video is still available to be checked. A full report should, in my opinion, always contain additional contextual info from the police involved and from witnesses, to add what the camera recordings do not necessarily reflect or contain.

nrhrjrjrjtntbt 12/27/2025||
My worry is that at scale, AI from one vendor can introduce biases. We won't know what those biases are, but whatever they are, the same bias affects all reports.
Manheim 12/27/2025||
That is something to worry about, agreed. So the quality and reliability of AI is what we should focus on. In addition, we should be able to keep track of (and keep records of) how the AI has been used to build its narrative and conclusions.
DangitBobby 12/27/2025|||
The EFF's angle is that the police can use an LLM's initial report maliciously, either to 1) let incriminating inaccuracies generated by the LLM stand or 2) fabricate incriminating inaccuracies. Afterwards, because the LLM generated the initial report, the officer has plausible deniability: they can say they didn't intentionally lie, they were just negligent in editing the initial draft. So it's about accountability washing.
Hikikomori 12/27/2025||
>That said, I believe it is important to acknowledge the fact that human memory, experience, and interpretation of "what really happened" are flawed; isn't that why the body cameras are in use in the first place? If everyone believed police officers were already able to recall the absolute truth of everything that happens in situations, why bother with the cameras?

Police tend to not tell the truth, on purpose.

intended 12/27/2025||
> In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that.

> Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.

Policing and Hallucinations. Can’t wait to see this replicated globally.

taneq 12/27/2025|
Does the officer not take full ownership of the report once they edit it? If they got an intern to write a report and then they signed off on it, they’d be responsible, right?
avidiax 12/27/2025||
This does sound problematic, but if a police officer's report contradicts the body-worn camera or other evidence, it already undermines their credibility, whether they blame AI or not. My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.

> That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report

The bigger issue, that the article doesn't cover, is that police officers may not carefully review the AI generated report, and then when appearing in court months or years later, will testify to whatever is in the report, accurate or not. So the issue is that the officer doesn't contradict inaccuracies in the report.

parineum 12/27/2025|
> My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.

That's because it's a very difficult thing to prove. Bad memories and even completely false memories are real things.

BrenBarn 12/27/2025|||
That's why we need a greatly reduced standard of proof for officer misconduct, especially when it comes to consequences like just losing your job (as opposed to, e.g., jail time).
lostnground 12/27/2025||
While I agree that officers should be held accountable, more enforcement against them will not suddenly make them good officers. Other nations train their police for years before putting them into the thick of it. US police spend far less time studying, and it shows, in everything from de-escalation tactics to general legal understanding. If you create a pipeline to weed out bad officers, then there needs to be a pipeline producing better officers.
tialaramex 12/27/2025|||
AIUI US policing is descended from slave catching and strike breaking. Two activities which I think we'd say today are obviously bad.

In many European states their policing starts as town guards tasked with ensuring order. Order is, at least, not obviously bad.

So that's a philosophical difference in what these forces even think their purpose is.

sejje 12/27/2025||
You understand it wrong. That's not where police come from.
pyth0 12/27/2025||
If you want to make your comment useful, you could share some information about where you understand policing in America to have originated.
sejje 12/28/2025||
Law enforcement is an idea that originated when law originated. There is no law without enforcement.

American settlers got the idea from the same place they got the idea for laws. Their home countries.

Enforcing laws isn't an American invention, let's not be ridiculous.

tialaramex 12/28/2025||
> Law enforcement is an idea that originated when law originated. There is no law without enforcement.

To the extent this is anything more than circular, it's false. Although psychopaths exist, on the whole compliance, to a lesser or greater degree, is a normal human trait. So you can tell people what the rules are and they'll obey to some extent. How much varies from person to person.

So the creation of specialist law enforcement bodies is a distinct and relatively modern change to civilisations. Before this, there was either no actual enforcement, or it depended on whether a powerful person knew you broke a rule and cared to enforce it.

awesome_dude 12/29/2025||
Law enforcement organizations existed in ancient times, such as prefects in ancient China, paqūdus in Babylonia, curaca in the Inca Empire, vigiles in the Roman Empire, and Medjay in ancient Egypt. Who law enforcers were and whom they reported to depended on the civilization and often changed over time, but they were typically slaves, soldiers, officers of a judge, or hired by settlements and households. Aside from their duties to enforce laws, many ancient law enforcers also served as slave catchers, firefighters, watchmen, city guards, and bodyguards.

By the post-classical period and the Middle Ages, forces such as the Santa Hermandades, the shurta, and the Maréchaussée provided services ranging from law enforcement and personal protection to customs enforcement and waste collection. In England, a complex law enforcement system emerged, where tithings, groups of ten families, were responsible for ensuring good behavior and apprehending criminals; groups of ten tithings ("hundreds") were overseen by a reeve; hundreds were governed by administrative divisions known as shires; and shires were overseen by shire-reeves. In feudal Japan, samurai were responsible for enforcing laws.

The concept of police as the primary law enforcement organization originated in Europe in the early modern period; the first statutory police force was the High Constables of Edinburgh in 1611, while the first organized police force was the Paris lieutenant général de police in 1667. Until the 18th century, law enforcement in England was mostly the responsibility of private citizens and thief-takers, albeit also including constables and watchmen. This system gradually shifted to government control following the 1749 establishment of the London Bow Street Runners, the first formal police force in Britain. In 1800, Napoleon reorganized French law enforcement to form the Paris Police Prefecture; the British government passed the Glasgow Police Act, establishing the City of Glasgow Police; and the Thames River Police was formed in England to combat theft on the River Thames. In September 1829, Robert Peel merged the Bow Street Runners and the Thames River Police to form the Metropolitan Police. The title of the "first modern police force" has still been claimed by the modern successors to these organizations.

https://en.wikipedia.org/wiki/Law_enforcement

The Americans do have a history of using police forces for slave capture, but police forces in the USA pre-dated that.

Following European colonization of the Americas, the first law enforcement agencies in the Thirteen Colonies were the New York Sheriff's Office and a county sheriff's department, both formed in the 1660s in the Province of New York. The Province of Carolina established slave-catcher patrols in the 1700s, and by 1785, the Charleston Guard and Watch was reported to have the duties and organization of a modern police force. The first municipal police department in the United States was the Philadelphia Police Department, while the first American federal law enforcement agency was the United States Marshals Service, formed in 1789. In the American frontier, law enforcement was the responsibility of county sheriffs, rangers, constables, and marshals. The first law enforcement agency in Canada was the Royal Newfoundland Constabulary, established in 1729, while the first Canadian national law enforcement agency was the Dominion Police, established in 1868.

BrenBarn 12/27/2025||||
Certainly agreed on that. I think part of it is training but also part of it is just vetting. There are pretty clearly too many people who get into policing out of a desire to wield authority rather than a desire to help people. In many cases I think there is not much use in trying to "train" such people; they just need to be doggedly weeded out. But yes, we need action on both ends, ensuring the pipeline produces good officers going in, and then also regular monitoring to ensure they stay good.
awesome_dude 12/27/2025|||
This is an outrageous lie, there were SEVEN Police Academy movies!!!
loeg 12/27/2025|||
Sure, but other court participants are given somewhat less grace for lying under oath.
parineum 12/27/2025||
Are they?

Perjury isn't a commonly prosecuted crime.

sylos 12/27/2025|||
If an officer misremembers something about you, you go to jail. If you misremember something about the event, you also go to jail. Yeah, I guess it tracks.
parineum 12/27/2025||
Cool non-sequitur.
loeg 12/27/2025||||
That's why I qualified it with "somewhat."
cwmoore 12/27/2025|||
Neither is grace a common defense.
m3047 12/27/2025||
Upvoted because I think it's an important topic, but this take causes me to question the motive for the article... which ironically is my big concern with using LLMs to write stuff generally (the unconscious censoring / proctoring of voice and viewpoint):

  That means that if an officer is caught lying on the stand – as shown by a
  contradiction between their courtroom testimony and their earlier police
  report – they could point to the contradictory parts of their report and say,
  “the AI wrote that.”
IANAL, but if they signed off on it then presumably they own it. Same as if it was Microsoft Dog, an intern, whatever. If they said "the AI shat it", then I'd ask "what parts did you find unacceptable and edit?", and then expect we'd get the juicy stuff: hallucinations or "I don't recall". Did they write this, or are they testifying to the veracity of hearsay?

From what I've seen, reports written by or for lawyers / jurists / judges already "pull" toward a voice and viewpoint; I'll leave it there.

idopmstuff 12/27/2025|
> But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that."

This seems solvable by passing a law that makes the officer legally responsible for the report as if he had written it. He doesn't get to use this excuse in the courtroom, and it gets stricken from the record if he tries. That honestly seems like a better solution than storing the original AI-generated version, because keeping that draft can reinforce, for jurors, the view that the AI wrote it, even if the officer reviewed it and decided it was correct at the time.

jfreds 12/27/2025|
Yeah, this seems like an obvious solution, which Axon ought to be on board with since it protects them.

When juniors use the excuse “oh Claude wrote that” in a PR, I tell them that if the PR has their name on it, they wrote it - and their PRs are part of their performance review. This is no different.
