Posted by walterbell 4/3/2025

The slow collapse of critical thinking in OSINT due to AI (www.dutchosintguy.com)
446 points | 231 comments
Aurornis 4/4/2025|
> Participants weren’t lazy. They were experienced professionals.

Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.

In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.

> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart

I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.

karaterobot 4/4/2025||
> In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources

He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.

potato3732842 4/4/2025||
Having a surface level understanding of what you're looking at is a huge part of OSINT.

These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.

jerf 4/4/2025|||
At least if you're on Reddit you've got a good chance of Cunningham's Law[1] making you realize it's not cut and dried. In this case, I refer to what you might call a reduced-strength version of Cunningham's Law, which I would phrase as "The best way to get the right answer on the Internet is not to ask a question; it's to post *what someone somewhere thinks is* the wrong answer," my added strength reduction in italics. At least if you stumble into a conversation where people are arguing, it is hard to avoid applying some critical thought to the situation to parse out who is correct.

The LLM-only AI just hands you a fully-formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong.

I'm actually deliberately cultivating an instinctive distrust of LLM-only AI, and would suggest it to other people, because even though it may be too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them... not because they are never right, but precisely because they are often right, yet nowhere near 100% right! If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem. Right now they sit in that maximum danger zone of correctness: right often enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.

[1]: https://en.wikipedia.org/wiki/Ward_Cunningham#Law

potato3732842 4/4/2025||
What you're describing for Reddit is farcically charitable except in cases where you could just google it yourself. What you're describing for the LLM is what Reddit does when any judgement is involved.

I've encountered enough instances, in subjects I am familiar with, where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before, and this is kind of nuanced with a lot of moving pieces; you'll need to understand the following X, the general gist of Y is..." type take that's more correct is heavily downvoted, that I feel justified in making the "safe" assumption that this is how all subjects work.

On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.

But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.

iszomer 4/4/2025||
That is why it is ideal to ask it double-sided questions, to test its biases as well as your own. Simply googling it is not enough when most people don't think to customize their search anyway, compounded by the fact that indexed sources may have changed or been deprecated over time.
smcin 4/15/2025||
Any tips or tricks in how to phrase double-sided questions for best results?
throwaway29812 4/4/2025|||
[dead]
low_tech_love 4/4/2025|||
The pull is too strong, especially when you factor in the fact that (a) the competition is doing it and (b) the recipients of such outcomes (reports, etc) are not strict enough to care whether AI was used or not. In this situation, no matter how smart you are, not using the new tool of the trade would be basically career suicide.
raducu 4/6/2025|||
> people who outsource their thinking to LLMs.

OSINT, I imagine, would be kind of useless to analyze with LLMs, because the kind of information you're interested in is very new, so there aren't enough sources for the LLMs to regurgitate.

As an example -- I read some defence articles about Romania eventually operating 70 F-16s, and it immediately caught my eye because I was expecting a number in the 40s. Apparently the Netherlands will leave its 18 F-16s to Romania -- but I'm not curious enough to dig into it -- I was expecting those would go to Ukraine.

So just for fun I asked the question -- of Gemini 2.5 and ChatGPT -- "How many F-16s will Romania eventually operate?" -- and they both regurgitated the 40s number. I explicitly asked Gemini about the 18 F-16s from the Netherlands and it kept its estimate, saying those are for training purposes.

Only after I explicitly laid out my own knowledge did Gemini google it and confirm it.

Or I asked about the tethered FPVs in Ukraine, and it told me those have very little impact. Only after I explicitly mentioned the recent successful Russian counter-offensive at Kursk did it acknowledge them.

torginus 4/4/2025|||
And these people in positions of 'responsibility' always need someone or something to point to when shit goes sideways, so they might as well.
sirspacey 4/7/2025|||
I'll be one to raise my hand and say this has been dramatically not the case for anyone I've introduced AI to, or for myself.

Significantly more informed and reasoned.

jart 4/4/2025||
Yeah it's similar to how Facebook is blamed for social malaise. Or how alcohol was blamed before that.

It's always more comfortable for people to blame the thing rather than the person.

InitialLastName 4/4/2025|||
More than one thing can be causing problems in a society, and enterprising humans of lesser scruples have a long history of preying on the weaknesses of others for profit.
jart 4/4/2025|||
Enterprising humans have a long history of giving people what they desire, while refraining from judging what's best for them.
ZYbCRq22HbJ2y7 4/4/2025|||
Ah yeah, fentanyl adulterators, what great benefactors of society.

Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.

https://en.wikipedia.org/wiki/Common_good

jart 4/4/2025||
No one desires adulterated fentanyl.
ZYbCRq22HbJ2y7 4/4/2025|||
No one has desire for adulteration, but they have a desire for an opiate high, and are willing to accept adulteration as a side effect.

You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.

harperlee 4/4/2025||||
Fentanyl adulterators, market creators and resellers certainly do, for higher margin selling and/or increased volume.
potato3732842 4/4/2025|||
The traffickers looking to pack more punch into each shipment that the government fails to intercept do.

Basically it's a response to regulatory reality, little different from soy wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wire rodents don't like to eat but that's just not possible while remaining in the black.

collingreen 4/4/2025|||
This is a fine statement on its own, but a gross reply to the parent.
isaacremuant 4/5/2025|||
Worse than enterprising humans are authoritarian humans who want to tell others how they should live, usually also exempting themselves from their rules.

They also prey on the weaknesses of humans and social appearances to do things for a "greater good".

There's a problem and we 'must do something', and if you're against doing the something I propose, you're evil and I'll label you.

The real mindfuck is that sometimes, an unscrupulous entrepreneur only has to play your "societal harm fighting" game through politicians and they get their way and we lose.

PeeMcGee 4/4/2025||||
I like the Facebook comparison, but the difference is you don't have to use Facebook to make money and survive. When the thing is a giant noisemaker crapping out trash that screws up everyone else's work (and thus their livelihood), it becomes a lot more than just some nuisance you can brush away.
friendzis 4/4/2025||
If you are in the news business you basically have to.
itishappy 4/4/2025||||
I think humans actually tend to prefer blaming individuals rather than addressing societal harms, but they're not in any way mutually exclusive.
jplusequalt 4/4/2025||||
Marketing has a powerful effect. Look at how the decrease in smoking coincided with the decrease in smoking advertising (and now look at the uptick in vaping, due to its marketing as a replacement for smoking).

Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.

jruohonen 4/3/2025||
"""

• Instead of forming hypotheses, users asked the AI for ideas.

• Instead of validating sources, they assumed the AI had already done so.

• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.

This isn’t hypothetical. This is happening now, in real-world workflows.

"""

Amen, and OSINT is hardly unique in this respect.

And implicitly related, philosophically:

https://news.ycombinator.com/item?id=43561654

cmiles74 4/3/2025||
Anyone using these tools would do well to take this article to heart.
mr_toad 4/4/2025||
I think there are a lot of people who use these tools because they don't like to read.
johnnyanmac 4/4/2025|||
>This isn’t hypothetical. This is happening now, in real-world workflows.

Yes, that's a part of why AI has its bad rep. It has uses to streamline workflows, but people are treating it like an oracle. When it very, very, very clearly is not.

Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.

gneuron 4/4/2025||
Reads like it was written by AI.
Animats 4/4/2025||
The big problem in open source intelligence is not in-depth analysis. It's finding something worth looking at in a flood of info.

Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.

The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.

DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.

[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...

[2] https://apnews.com/article/us-intelligence-services-ai-model...

D_Alex 4/4/2025||
The really big problem in open source intelligence has for some time been that data can be found to support just about anything. OSINT investigations start with a premise, look for data that supports the premise, and rarely look for data that contradicts it.

Sometimes this is just sloppy methodology. Other times it is intentional.

dughnut 4/4/2025||
OSINT makes it sound like a serious military operation, but I think political opposition research is a much more accurate term for this sort of thing.
B1FF_PSUVM 4/4/2025||
> listen to Radio Albania just in case somebody said something important

... or just to know what they seem to be thinking, which is also important.

euroderf 4/4/2025||
I got Radio Tirana once (1990-ish) on my shortwave. The program informed me, something to the effect that Albania is often known as the Switzerland of the Balkans because of its crystal-clear mountain lakes.
palmotea 4/3/2025||
One way to achieve superhuman intelligence in AI is to make humans dumber.
ryao 4/4/2025||
This reminds me of the guy who said he wanted computers to be as reliable as TVs. Then smart TVs were made and TV quality dropped to satisfy his goal.
SoftTalker 4/4/2025||
The TVs prior to the 1970s/solid-state era were not very reliable. They needed repair often enough that "TV repairman" was a viable occupation. I remember having to turn on the TV a half hour before my dad got home from work so it would be "warmed up" by the time he watched the evening news. We're still at that stage of AI.
ryao 4/4/2025||
The guy started saying it in the 80s or 90s, when that issue had been fixed. He is the Minix guy, if I recall correctly.
xrd 4/4/2025|||
If you came up with that on your own then I'm very impressed. That's very good. If you copied it, I'm still impressed and grateful you passed it on.
card_zero 4/4/2025|||
Raises hand

https://news.ycombinator.com/item?id=43303755

I'm proud to see it evolving in the wild; this version is better. Or, you know, it could just be in the zeitgeist.

xrd 4/4/2025||
I'll never forget you, card_zero.
BrenBarn 4/4/2025|||
What if ChatGPT came up with it?
palmotea 4/4/2025||
I don't use LLMs, because I don't want to let my biggest advantages atrophy.
MrMcCall 4/4/2025||
while gleefully watching the bandwagon fools repeatedly ice-pick themselves in the brain.
6510 4/4/2025|||
I thought: a group working together poorly isn't smarter than the smartest person in that group.

But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.

trentlott 4/4/2025|||
That's a fascinatingly obvious idea and I'd like to see data that supports it. I assume there must be some.
6510 4/5/2025||
I understand but the bug is closed now.
jimmygrapes 4/4/2025|||
anybody who's ever tried to play bar trivia with a team should recognize this
tengbretson 4/4/2025|||
Being timid in bar trivia is the same as being wrong.
rightbyte 4/4/2025|||
What do you mean? You can protest against bad but fast answers and check another box with the pen.
boringg 4/4/2025|||
The cultural revolution approach to AI.
imoverclocked 4/3/2025|||
That’s only if our stated goal is to make superhuman AI and we use AI at every level to help drive that goal. Point received.
yieldcrv 4/3/2025||
Right, superhuman would be relative to humans,

but our notion of intelligence as a whole is based on the human ego of being intellectually superior

caseyy 4/4/2025||
That’s an interesting point. If we created super-intelligence but it wasn’t anthropomorphic, we might just not consider it super-intelligent as a sort of ego defence mechanism.

Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors.

If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.

yieldcrv 4/4/2025||
Some things I think about are how different the goals could be

For example, human and biological goals center on self-preservation and propagation. That in turn is about appropriating resources to facilitate them, and systems for doing that become wealth accumulation. Species that don't do this don't continue existing.

A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway.

caseyy 4/4/2025||
This reminds me of the "universal building blocks of life" or the "standard model of biochemistry" I learned at school in the 90s. It held that all life requires water, carbon-based molecules, sunlight, and CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur).

Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight.

Sometimes anthropocentrism blinds us. It's a phenomenon that's quite interesting.

0hijinks 4/4/2025||
It sure seems like the use of GenAI in these scenarios is a detriment rather than a useful tool if, in the end, the operator must interrogate it to a fine enough level of detail that she is satisfied. In the author's Scenario 1:

> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”

> It spits out a convincing response: “Paris, near Place de la République.” ...

> But a trained eye would notice the signage is Belgian. The license plates are off.

> The architecture doesn’t match. You trusted the AI and missed the location by a country.

Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."

How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French, but some words are Dutch. Okay, so it could be somewhere else entirely. Let's look into the license plate patterns...

At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.

EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...

MadnessASAP 4/4/2025|
The advantage of the AI in this scenario is the starting point. You can now start cross-referencing signage, language, license plates, and landmarks to verify or disprove the conclusion.

A further extension to the AI "conversation" might be: "What other locations are similar to this?" And "Why isn't it those locations?" Which you can then cross reference again.

Using AI as an entry point into massive datasets (like millions of photos from around the world) is actually useful. Correlation is what AI is good at, though not infallible.

Of course false correlations exist, and correlation is not causation, but if you can narrow your search space from the entire world to the Eiffel Tower in Paris (or in Vegas), you're ahead of the game.

0hijinks 4/5/2025|||
The point about entry into massive, maybe intractable datasets makes sense to me. I get the usefulness of GenAI there, for sure. Another commenter suggested:

>> Its more like, here are some possible answers where there were none before.

That's a good point. AI doesn't need to be an authority, but another way of generating leads, maybe when it would be time-intensive to do so yourself.

If it's not prohibitive to do the digging with a human, do that. Because if adequately trained and rested, the human will perform more reliably.

cowboylowrez 4/5/2025|||
yeah sort of like grilling an unreliable witness on the stand lol I like it.
LurkandComment 4/4/2025||
1. I've worked with analysts and done analysis for 20+ years. I have used Machine Learning with OSINT as far back as 2008 and use AI with OSINT today. I also work with many related analysts.

2. Most analysts in a formal institution are professionally trained. In Europe, Canada, and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills; certainly the good ones do.

3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process, so there are a lot of people who CAN be OSINT analysts, or call themselves that, and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.

4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are paying for expensive tools, and there will be pressure to give in to AI's suggestions.

5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before, or that took way too much time to be worth it. It's more like: here are some possible answers where there were none before. Your job is to validate them now and see what assumptions have been made.

6. These assumptions existed long before AI and OSINT. I've seen many cases where we had multiple people look at evidence to make sure no one was jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people, or fewer passes, to validate the data.

7. Feel free to ask me more.

whatnow37373 4/4/2025|
1. I think you are onto something here.
sanarothe 4/4/2025||
I think there's something about the physical acts and moments of writing out or typing out the words, or doing the analysis, etc. Writing 'our', backspacing, then going forward again. Writing out a word but skipping two letters ahead, crossing out, starting again. Stopping mid-paragraph to have a sip of coffee.

What Dutch OSINT Guy was saying here resonates with me for sure - the act of taking a blurry image into the photo editing software, the use of the manipulation tools. There seems to be something about those little acts that is an essential piece of thinking through a problem.

I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer, but that's only helpful as reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to the blocks, putting the care into linking the connections and putting things in order.

Ideas and questions seem to come out of those little spaces. Maybe it's just giving our subconscious a chance to finally speak, hah.

L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/) - that the little moments of rote work - putting the dishes away, weeding the garden, walking the dog - are all essential parts of life. Taking care of the mundane is living, and we must attend to it with care and gratitude.

pcj-github 4/4/2025||
This resonates with me. I feel like AI is making me learn slower.

For example, I am learning Rust, and have been for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve a working competence with it, because I always seem reliant on the LLM to do the thinking. I think I will have to turn off all the AI and struggle, struggle, struggle until I don't, just like the old days.

imadethis 4/4/2025||
I've found the same effect when I ask the LLM to do the thinking for me. If I say "rewrite this function to use a list comprehension", I don't retain anything. It's akin to looking at Stack Overflow and copying the first result, or going through a tutorial that tells you what to write without ever explaining it.

The real power I've found is using it as a tutor for my specific situation. "How do list comprehensions work in Python?" "When would I use a list comprehension?" "What are the performance implications?" Being able to see the answers to these with reference to the code on my screen and in my brain is incredibly useful. It's far easier to relate to the business logic I care about than class Foo and method Bar.
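To make that concrete, a tiny runnable illustration (the variable names here are just placeholders, not from any real codebase):

    nums = [1, 2, 3, 4, 5, 6]

    # explicit loop
    squares = []
    for n in nums:
        if n % 2 == 0:
            squares.append(n * n)

    # the equivalent list comprehension
    squares = [n * n for n in nums if n % 2 == 0]

Seeing the two forms side by side, against code you already understand, is what makes the "when would I use this?" answer stick.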

Regarding retention, LLMs still don't hold a candle to properly studying the problem with (well-written) documentation or educational materials. The responsiveness, however, makes them a close second for overall utility.

ETA: This is regarding coding problems specifically. I've found LLMs fall apart pretty fast in other fields. I was poking at some astrophysics stuff and the answers were nonsensical from the jump.

MrMcCall 4/4/2025||
> It's akin to looking at Stack Overflow and copying the first result, or going through a tutorial that tells you what to write without ever explaining it.

But if you're not digesting the why of the technique vis-a-vis the how of what is being done, then not only are you gaining nothing but a check mark in a todo list item's box, you're quite likely introducing bugs into your code.

I used SO yesterday (from a general DDG search) to help me learn how to process JSON with Python. I built up my test script from imports, to processing a string, to processing a file, to dump'ing it, to processing specific elements, to iterating through it a couple of different ways.
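Roughly the shape of that script, for the curious (a sketch; the sample data and filename are my own placeholders):

    import json

    # parse JSON from a string
    data = json.loads('{"name": "widget", "tags": ["a", "b"]}')

    # dump it to a file, then load it back
    with open("example.json", "w") as f:
        json.dump(data, f, indent=2)
    with open("example.json") as f:
        data = json.load(f)

    # pull out specific elements
    print(data["name"])

    # iterate a couple of different ways
    for key in data:
        print(key, data[key])
    for key, value in data.items():
        print(key, value)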

Along the way, I made some mistakes, which were very helpful in leveling-up my python skills. At the end, not only did my script work, but I had gained a level of skill at my craft for a very specific use-case.

There are no shortcuts to up-leveling oneself, my friend, not in any endeavor, but especially not in programming, which may well be the most difficult job on the planet, given its ubiquity and overall lack of quality.

jart 4/4/2025|||
Try using the LLM as a learning tool, rather than asking it to do your job.

I don't really like the way LLMs code. I like coding. So I mostly do that myself.

However, I find it enormously useful to be able to ask an LLM questions. You know the sort of question you need to ask to build an intuition for something? Where it's not a clear problem-and-answer type question you could just Google, but the sort of thing where you'd traditionally have to go hunt down a human being and ask them? LLMs are great at that. Like if I want to ask what the point of something is, an LLM can give me a much better idea than reading its Wikipedia page.

This sort of personalized learning experience that LLMs offer, your own private tutor (rather than some junior developer you're managing) is why all the schools that sit kids down with an LLM for two hours a day are crushing it on test scores.

It makes sense if you think about it. LLMs are superhuman geniuses in the sense of knowing everything. So use them for their knowledge. But knowing everything is distracting for them and, for performance reasons, LLMs tend to do much less thinking than you do. So any work where effort and focus is what counts the most, you're better off doing that yourself, for now.

eschaton 4/4/2025|||
Why are you using an LLM at all when it’ll both hamper your learning and be wrong?
dwaltrip 4/4/2025||
> While AI has been very helpful in lowering the bar to /begin/ learning Rust
whatnow37373 4/4/2025|||
The world will slowly, slowly converge on this, but not before many years of hyping and preaching about how this shit is the best thing since sliced bread, and of it being shoved into our faces all day long. In the meantime, I suggest we be mindful of our AI usage and keep our minds sharp. We might be the only ones left after a decade or two of this.
neevans 4/4/2025||
Nah, you are getting it wrong. The issue here is YOU NO LONGER NEED TO LEARN RUST. That's why you are learning it slowly.
whatnow37373 4/4/2025||
Yeah. AI will write Rust and then you only have to review .. oh.

But AI will review it and then you only have to .. oh

But AI will review AI and then you .. oh ..

treyfitty 4/3/2025||
Well, if I want to first understand the basics, such as "what do the letters OSINT mean," I'd think the homepage (https://osintframework.com/) would tell me. But alas, it does not, and a simple ChatGPT query would have told me the answer without the wasted effort.
OgsyedIE 4/3/2025||
Similar criticisms, that outsiders need to do their own research to acquire a foundational understanding before they start on the topic, can be made about other popular topics on HN that frequently use abbreviations, such as TLS, BSDs, URL and MCP, but somehow those get a pass.

Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners, and which one does the opposite? I'm confident that you can work it out.

Aeolun 4/4/2025||
I think if I can expect my mom to know what it is, I shouldn’t have to define it in articles any more.

So TLS and URL get a pass; BSDs and MCP need to be defined at least once.

ChadNauseam 4/4/2025|||
Your mom knows what TLS is? I'm not even sure that more than 75% of programmers do.
pixl97 4/4/2025||
If programmers had a character sheet it would state they have a -50% penalty to any security concepts.
jonjojojon 4/4/2025||||
Does your mom really know what TLS means? I would guess that even "tech savvy" members of the general public don't.
inkcapmushroom 4/4/2025|||
Relevant XKCD: https://xkcd.com/2501/
caseyy 4/4/2025|||
OSINT = open-source intelligence. It's the whole of the openly accessible data fragments about a person or item of interest, and their use for intelligence-gathering objectives.

For example, suppose a person shares a photo online, and your intelligence objective is to find where they are. In that case, you might use GPS coordinates in the photo metadata or a famous landmark visible in the image to achieve your goal.

This is just for others who are curious.
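For the photo-metadata case, a minimal sketch using Pillow (the file path is a placeholder, and note that many platforms strip EXIF data on upload):

    # pip install Pillow
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("photo.jpg")  # hypothetical file
    gps_ifd = img.getexif().get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    # Coordinates come back as (degrees, minutes, seconds) plus a hemisphere ref
    print(gps.get("GPSLatitudeRef"), gps.get("GPSLatitude"))
    print(gps.get("GPSLongitudeRef"), gps.get("GPSLongitude"))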

walterbell 4/3/2025|||
GPU-free URL: https://en.wikipedia.org/wiki/OSINT

Offline version: https://www.kiwix.org

lmm 4/4/2025||
> Offline version: https://www.kiwix.org

That doesn't actually work though. Try to set it up and it just fails to download.

walterbell 4/4/2025||
On which platform? It's a mature project that has been working for years on desktops and phones, with content coverage that has expanded beyond wikipedia, e.g. stackoverflow archives. Downloadable from the nearest app store.
jrflowers 4/4/2025|||
Volunteering "I give up if the information I want isn't on the first page of the first website that I think of" in a thread about AI tools eroding critical thinking isn't the indictment of the site you linked to that you think it is.

There is a whole training section right there; it's like you just didn't feel like clicking on it.

hmcq6 4/3/2025|||
The OSINT framework isn’t meant to be an intro to OSINT. This is like getting mad that https://planningpokeronline.com/ doesn’t explain what Kanban is.

If anything, you've just pointed out how over-reliance on AI is weakening your ability to search for relevant information.

dullcrisp 4/3/2025|||
Ironically, my local barber shop also wouldn't explain to me what OSINT stands for.
Daub 4/4/2025|||
There is a lot to be said for the academic tradition of only using an acronym/abbreviation after you have first used the complete term.
nkrisc 4/4/2025|||
https://duckduckgo.com/?q=osint
sn9 4/5/2025||
Why wouldn't you check wikipedia?

https://en.wikipedia.org/wiki/Open-source_intelligence

ridgeguy 4/3/2025|
I think this post isn't limited to OSINT. It's widely applicable, probably anywhere AI is being adopted as a new set of tools.
ttyprintk 4/4/2025|
The final essay for my OSINT cert was to pick a side: critical thinking can/cannot be taught.
jncfhnb 4/5/2025||
Remember ~10 years ago when everyone was talking about how greatly valued liberal arts degrees were because they “taught critical thinking”?

Idk if it's just because I'm no longer concerned with college, but I sure haven't heard that in a long time.

ttyprintk 4/5/2025||
What changed my mind was the Signal group chat with Mike Waltz. Absolutely none of the statements from officials appeal to the critical thinking element of opsec. And he's been employed in this role since the early 2000s. If he can't learn it, then, practically speaking, it cannot be taught to the kind of person who gets elevated to that role.