
Posted by walterbell 4/3/2025

The slow collapse of critical thinking in OSINT due to AI (www.dutchosintguy.com)
446 points | 231 comments
Terr_ 4/4/2025|
> What Dies When Tradecraft Goes Passive?

Eventually, Brazil (1985) happens, to the detriment of Archibald [B]uttle, because everyone gives unquestioning trust to a flawed system.

ringeryless 4/4/2025||
Aka, I have no problem being explicitly anti-AI: I think it was a bad idea to begin with, a foolish project from the get-go.

Techne is the Greek word for craft, the skilled work of the HAND.

black_puppydog 4/4/2025||
I'd argue that for a profession that has existed for quite some time, "since ChatGPT appeared" isn't in any way "slow".
stuckkeys 4/4/2025||
The entire article has AI-generated content mixed in. But I get it. Yes, people are going to get obliterated if they rely only on AI for answers.
torginus 4/4/2025||
Most cybersecurity is just a smoke show anyway; presentation matters more than content. AI is just as good at security theater as humans are.
rurban 4/5/2025||
No collapse with tests, as with every junior: the junior can try, but must pass the tests.

If you have no tests, you have already collapsed.
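
A minimal sketch of that gate, with a hypothetical parse_country_iso2 standing in for whatever the junior (or the LLM) produced; nothing ships unless pytest passes:

# Hypothetical example: the function under test could come from a
# junior dev or from an LLM; either way it must survive the suite.
def parse_country_iso2(name: str) -> str:
    table = {"Germany": "DE", "Japan": "JP", "Brazil": "BR"}
    return table[name]

def test_parse_country_iso2():
    assert parse_country_iso2("Germany") == "DE"
    assert parse_country_iso2("Brazil") == "BR"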

ingohelpinger 4/4/2025||
It's true; so often ChatGPT has to apologize because it was wrong. lol
dambi0 4/4/2025|
Do you think humans are less likely to be wrong or just less likely to apologize when they are?
ingohelpinger 4/4/2025||
I think being wrong is fine, but being wrong intentionally is another matter: that takes emotions, consciousness, pride, etc., which AI does not have as of now. This leads me to believe it's just another religion that will be used to "make the world a better place" :D
Daub 4/4/2025||
Am I the only one who had to search for what OSINT is an acronym for?
axegon_ 4/4/2025||
What's happening in OSINT is a symptom of it. When GPT-2 came along, I was worried that at some point the internet would get spammed with AI crap. Boy, was I naive... I see this incredibly frequently, and I get a ton of hate for saying it (including here on HN): LLMs, and AI in general, are a perfect demonstration of the shiny-new-toy effect. What people fail to acknowledge is that the so-called "reasoning" is nothing more than predicting the most likely next token, which works reasonably well for basic one-off tasks. And I have used LLMs that way: "give me the ISO 3166-1 codes of the following 20 countries". That works. But as soon as you throw something more complex at them and start analyzing the results (which look reasonable at first glance), the picture becomes very different. "Oh, just use RAG, are you dumb?", I hear you say. Yeah? Here's an example:

from pydantic import BaseModel

# Target schema for structured address extraction:
class ParsedAddress(BaseModel):
    street: str | None
    postcode: str | None
    city: str | None
    province: str | None
    country_iso2: str | None

Response:

{
  "street": "Boulevard",
  "postcode": 12345,
  "city": "Cannot be accurately determined from the input",
  "province": "MY and NY are both possible in the provided address",
  "country_iso2": "US"
}

Sure, I could spend two days trying out different models and tweaking prompts to see which one gets it right, but I have 33 billion other addresses and a finite amount of time.
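
For what it's worth, the schema can at least catch the type violations mechanically. A minimal sketch, assuming Pydantic v2 (whose default lax mode does not coerce the int postcode into a str), feeding the response above back through the model:

from pydantic import ValidationError

llm_response = {
    "street": "Boulevard",
    "postcode": 12345,  # int where the schema demands str | None
    "city": "Cannot be accurately determined from the input",
    "province": "MY and NY are both possible in the provided address",
    "country_iso2": "US",
}

try:
    ParsedAddress.model_validate(llm_response)
except ValidationError as exc:
    print(exc)  # flags the postcode type violation

Note the limit, though: "Cannot be accurately determined from the input" is a perfectly valid str, so the semantically useless city and province values sail straight through type validation.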

The same issue occurs in OSINT: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and they are falling for it yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.

roenxi 4/3/2025|
This article seems a bit weird because it doesn't talk about whether the quality of the analysis went up or down afterwards.

To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code, leaving it to the type checker to catch them. If the type checker turns out to be buggy, their code may fail in production due to that sloppiness. But we expect the code to eventually be free of type errors to a superhuman extent, because they are using a tool whose strengths cover their personal weaknesses.
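
A throwaway illustration (hypothetical function; any static checker such as mypy would do): the programmer never audits this by hand, yet the mistake cannot reach production:

# mypy rejects this before it ever runs: `amount * unit` is string
# repetition and yields a str, not the declared int return type.
def total_price(amount: int, unit: str) -> int:
    return amount * unit  # error: incompatible return value type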

AI isn't as provably correct as a type checker, but it's pretty good at critical thinking (superhuman compared to the average HN argument), and human analysts also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on the quality of the analysis, and I don't see why the assumption is that it's negative. It might well be; but the article doesn't seem to go into that in any depth.
