Posted by walterbell 1 day ago
Now that we have thinking models and methodology to train them, surely before long it will be possible to have a model that is very good at the kind of thinking that an expert OSINT analyst knows how to do.
There are so many low-hanging-fruit applications of existing LLM strengths that simply haven't been added to the training yet, but will be at some point.
Besides, "OSINT" has been busy posting scareware for years, even before "AI".
There's so much spam that you can't figure out what the real security issues are. Every other "security article" is about "an attacker" that "could" obtain access if you were sitting at your keyboard and they were holding a gun to your head.
Mere observation of others has shown me the decadence that results from even allowing such "tools" into my life at all.
(Who, or what, is the tool being used here?)
I have seen zero positive effects from the cynical application of such tools in any aspect of life. The narrative that we "all use them" is false.
But all the examples feel like people are being really lazy, e.g.
> Paste the image into the AI tool, read the suggested location, and move on.
> Ask Gemini, “Who runs this domain?” and accept the top-line answer.
I bet any OSINT person would have had my name and contact in half an hour.
Eventually, Brazil (1985) happens, where everyone gives unquestioning trust to a flawed system, to the detriment of Archibald Buttle.
Techne is the Greek word for craft, for skill of the hand.
    class ParsedAddress(BaseModel):
        street: str | None
        postcode: str | None
        city: str | None
        province: str | None
        country_iso2: str | None

Response:

    {
      "street": "Boulevard",
      "postcode": 12345,
      "city": "Cannot be accurately determined from the input",
      "province": "MY and NY are both possible in the provided address",
      "country_iso2": "US"
    }

Sure, I can spend 2 days trying out different models and tweaking the prompts to see which one gets it right, but I have 33 billion other addresses and a finite amount of time.
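One pragmatic guard against responses like the one above is to validate the raw output before ingesting it, rejecting rows where the model returned the wrong type or smuggled an explanation into a field. Here is a minimal sketch in plain Python; the `validate_parsed_address` helper and the hedge-phrase list are illustrative assumptions, not anything from the comment or from Pydantic itself:

```python
# Post-hoc validator for LLM-parsed address rows (illustrative sketch).
# Rejects fields that are not str-or-None, and flags values where the
# model put prose ("cannot be determined...") where data should be.
# HEDGE_PHRASES is an assumed, non-exhaustive heuristic list.

HEDGE_PHRASES = ("cannot be", "not determined", "both possible", "unclear")
FIELDS = ("street", "postcode", "city", "province", "country_iso2")

def validate_parsed_address(raw: dict) -> tuple[dict, list[str]]:
    """Return (cleaned_row, errors). A non-empty error list means the
    row should be re-queued or sent to a fallback parser, not ingested."""
    cleaned, errors = {}, []
    for field in FIELDS:
        value = raw.get(field)
        if value is None:
            cleaned[field] = None
        elif not isinstance(value, str):
            errors.append(f"{field}: expected str or None, got {type(value).__name__}")
        elif any(p in value.lower() for p in HEDGE_PHRASES):
            errors.append(f"{field}: looks like prose, not data: {value!r}")
        else:
            cleaned[field] = value
    return cleaned, errors

# The sample response above fails on three of the five fields:
row, errs = validate_parsed_address({
    "street": "Boulevard",
    "postcode": 12345,
    "city": "Cannot be accurately determined from the input",
    "province": "MY and NY are both possible in the provided address",
    "country_iso2": "US",
})
```

At 33 billion rows, a cheap reject-and-retry filter like this is the only realistic alternative to eyeballing prompts per model.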
The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and we are falling for it yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.