Posted by walterbell 1 day ago
Assuming these professionals were great critical thinkers until the AI came along and changed that is a big stretch.
In my experience, the people who outsource their thinking to LLMs are the same people who outsourced their thinking to podcasts, news articles, Reddit posts, Twitter rants, TikTok videos, and other such sources. LLMs just came along and offered them opinions on demand that they could confidently repeat.
> The scary part is that many users still believed they were thinking critically, because GenAI made them feel smart
I don’t see much difference between this and someone who devours TikTok videos on a subject until they feel like an expert. Same pattern, different sources. The people who outsource their thinking and collect opinions they want to hear just have an easier way to skip straight to the conclusions they want now.
He's talking specifically about OSINT analysts. Are you saying these people were outsourcing their thinking to podcasts, etc. before AI came along? I have not heard anyone make that claim before.
These people absolutely were reading Reddit comments from a year ago to help them parse unfamiliar jargon in some document they found or make sense of what's going on in an image or whatever.
The LLM-only AI just hands you a fully formed opinion with always-plausible-sounding reasons. There's no cognitive prompt to make you consider whether it's wrong. I'm deliberately cultivating an instinctive distrust of LLM-only AI, and I'd suggest it to other people: even if that distrust is too critical on a percentage basis, you need it as a cognitive hack to remember to check everything coming out of them. Not because they're never right, but precisely because they're often right, yet nowhere near 100% right. If they were always wrong we wouldn't have this problem, and if they were reliably 99.9999% right we wouldn't have this problem either. Right now they sit in the maximum danger zone of correctness: right enough that we cognitively relax after a while, but nowhere near right enough for that to be OK on any level.
I've encountered enough instances in subjects I'm familiar with where the "I'm 14 and I just googled it for you" answer that's right 51% of the time and dangerously wrong the other 49% is highly upvoted, while the "so I've been here before and this is kind of nuanced with a lot of moving pieces, you'll need to understand the following X, the general gist of Y is..." type take that's more correct is heavily downvoted. So I feel justified in making the "safe" assumption that this is how all subjects work.
On one hand at least Reddit shows you the downvoted comment if you look and you can go independently verify what they have to say.
But on the other hand the LLM is instant and won't screech at you if you ask it to cite sources.
It's always more comfortable for people to blame the thing rather than the person.
Screaming "no one is evil, it's just markets!" probably helps people who base their lives on exploiting the weak sleep better at night.
You can look to the prohibition period for historical analogies with alcohol, plenty of enterprising humans there.
Basically it's a response to regulatory reality, little different from soy wire insulation in automobiles. I'm sure they'd love to deliver pure opium and wire rodents don't like to eat but that's just not possible while remaining in the black.
Malaise exists at an individual level, but it doesn't transform into social malaise until someone comes in to exploit those people's addictions for profit.
Here's the CIA's perspective on this subject.[1] The US intelligence community has a generative AI system to help analyze open source intelligence. It's called OSIRIS.[2] There are some other articles about it. The previous head of the CIA said the main use so far is summarization.
The original OSINT operation in the US was the Foreign Broadcast Monitoring Service from WWII. All through the Cold War, someone had to listen to Radio Albania just in case somebody said something important. The CIA ran that for decades. Its descendant is the current open source intelligence organization. Before the World Wide Web, they used to publish some of the summaries on paper, but as people got more serious about copyright, that stopped.
DoD used to publish The Early Bird, a daily newsletter for people in DoD. It was just reprints of articles from newspapers, chosen for stories senior leaders in DoD would need to know about. It wasn't supposed to be distributed outside DoD for copyright reasons, but it wasn't hard to get.
[1] https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2b...
[2] https://apnews.com/article/us-intelligence-services-ai-model...
Sometimes this is just sloppy methodology. Other times it is intentional.
... or just to know what they seem to be thinking, which is also important.
• Instead of forming hypotheses, users asked the AI for ideas.
• Instead of validating sources, they assumed the AI had already done so.
• Instead of assessing multiple perspectives, they integrated and edited the AI’s summary and moved on.
This isn’t hypothetical. This is happening now, in real-world workflows.
"""
Amen, and OSINT is hardly unique in this respect.
And implicitly related, philosophically:
Yes, that's part of why AI has its bad rep. It has uses in streamlining workflows, but people are treating it like an oracle, when it very, very clearly is not.
Worse yet, people are just being lazy with it. It's the equivalent of googling a topic and pasting the lede of the Wikipedia article. Which is tasteless, but still likely to be more right than unfiltered LLM output.
What Dutch OSINT Guy was saying here resonates with me for sure - the act of taking a blurry image into the photo editing software, the use of the manipulation tools, there seems to be something about those little acts that are an essential piece of thinking through a problem.
I'm making a process flow map for the manufacturing line we're standing up for a new product. I already have a process flow from the contract manufacturer but that's only helpful as reference. To understand the process, I gotta spend the time writing out the subassemblies in Visio, putting little reference pictures of the drawings next to the block, putting the care into linking the connections and putting things in order.
Ideas and questions seem to come out of those little spaces. Maybe it's just finally giving our subconscious a chance to speak, hah.
L.M. Sacasas writes a lot about this from a 'spirit' point of view on [The Convivial Society](https://theconvivialsociety.substack.com/) - that the little moments of rote work - putting the dishes away, weeding the garden, the walking of the dog, these are all essential part of life. Taking care of the mundane is living, and we must attend to them with care and gratitude.
https://news.ycombinator.com/item?id=43303755
I'm proud to see it evolving in the wild, this version is better. Or you know it could just be in the zeitgeist.
But it's worse: a group working together poorly isn't smarter than the fastest participant in the group.
but intelligence as a whole is based on a human ego of being intellectually superior
Much good (and bad) sci-fi was written about this. In it, usually this leads to some massive conflict that forces humans to admit machines as equals or superiors.
If we do develop super-intelligence or consciousness in machines, I wonder how that will all go in reality.
For example, human and biological based goals are around self-preservation and propagation. And this in turn is about resource appropriation to facilitate that, and systems of doing that become wealth accumulation. Species that don't do this don't continue existing.
A different branch of the evolution of intelligence may take a different approach, one that allows its effects to persist anyway.
Since then, it's become clear that much life in the deep sea is anaerobic, doesn't use phosphorus, and may thrive without sunlight.
Sometimes anthropocentrism blinds us. It's a phenomenon that's quite interesting.
> You upload a protest photo into a tool like Gemini and ask, “Where was this taken?”
> It spits out a convincing response: “Paris, near Place de la République.” ...
> But a trained eye would notice the signage is Belgian. The license plates are off.
> The architecture doesn’t match. You trusted the AI and missed the location by a country.
Okay. So let's say we proceed with the recommendation in the article and interrogate the GenAI tool. "You said the photo was taken in Paris near Place de la République. What clues did you use to decide this?" Say the AI replies, "The signage in the photo appears to be in French. The license plates are of European origin, and the surrounding architecture matches images captured around Place de la République."
How do I know any better? Well, I should probably crosscheck the signage with translation tools. Ah, it's French but some words are Dutch. Okay, so it could be somewhere else in Paris. Let's look into the license plate patterns...
At what point is it just better to do the whole thing yourself? Happy to be proven wrong here, but this same issue comes up time and time again with GenAI involved in discovery/research tasks.
EDIT: Maybe walk through the manual crosschecks hand-in-hand? "I see some of the signage is in Dutch, such as the road marking in the center left of the image. Are you sure this image is near Place de la République?" I have yet to see this play out in an interactive session. Maybe there's a recorded one out there...
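On the signage cross-check specifically, even a crude offline heuristic can flag a French/Dutch mismatch worth investigating before trusting the model's "Paris" answer. A toy sketch (the wordlists here are hypothetical and tiny, nothing like a real language detector):

```python
# Crude language check for OCR'd signage text, using tiny hand-picked
# wordlists. These hint sets are illustrative only, far from exhaustive.
FRENCH_HINTS = {"rue", "place", "sortie", "interdit", "gare"}
DUTCH_HINTS = {"straat", "uitgang", "verboden", "plein", "station"}

def signage_language_hint(words):
    """Return a coarse guess based on which wordlist matches more tokens."""
    tokens = {w.lower().strip(".,!") for w in words}
    fr = len(tokens & FRENCH_HINTS)
    nl = len(tokens & DUTCH_HINTS)
    if fr == nl:
        return "inconclusive"
    return "french-leaning" if fr > nl else "dutch-leaning"
```

A "dutch-leaning" result on a photo the model placed in Paris is exactly the kind of cheap tripwire that tells you to keep digging rather than accept the answer.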
A further extension to the AI "conversation" might be: "What other locations are similar to this?" And "Why isn't it those locations?" Which you can then cross reference again.
Using AI as an entry point into massive datasets (like millions of photos from around the world) is actually useful. Correlation is what AI is good at, even if it's not infallible.
Of course false correlations exist and correlation is not causation but if you can narrow your search space from the entire world to the Eiffel tower in Paris or in Vegas you're ahead of the game.
For example, I've been learning Rust for quite a while now. While AI has been very helpful in lowering the bar to /begin/ learning Rust, it's making it slower to achieve working competence with it, because I always seem reliant on the LLM to do the thinking. I think I'll have to turn off all the AI and struggle, struggle, struggle until I don't, just like the old days.
The real power I've found is using it as a tutor for my specific situation. "How do list comprehensions work in Python?" "When would I use a list comprehension?" "What are the performance implications?" Being able to see the answers to these with reference to the code on my screen and in my brain is incredibly useful. It's far easier to relate to the business logic I care about than class Foo and method Bar.
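Those tutor-style questions do have concrete, demonstrable answers. A minimal sketch of what a list comprehension buys you over the equivalent loop:

```python
# A list comprehension builds a list in a single expression,
# equivalent to appending inside a for loop.
squares_loop = []
for n in range(5):
    squares_loop.append(n * n)

squares_comp = [n * n for n in range(5)]  # same result, more concise

# With a condition: keep only the squares of even numbers.
even_squares = [n * n for n in range(5) if n % 2 == 0]
```

Performance-wise the comprehension avoids repeated method lookups of `.append`, but the bigger win is usually readability; for large data you'd reach for a generator expression instead of materializing the list.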
Regarding retention, LLMs still don't hold a candle to properly studying the problem with (well-written) documentation or educational materials. The responsiveness, however, makes them a close second for overall utility.
ETA: This is regarding coding problems specifically. I've found LLMs fall apart pretty fast on other fields. I was poking at some astrophysics stuff and the answers were nonsensical from the jump.
But if you're not digesting the why of the technique vis a vis the how of what is being done, then not only are you gaining nothing but a check mark in a todo list item's box, but you're quite likely introducing bugs into your code.
I used SO yesterday (from a general DDG search) to help me learn how to process JSON with python. I built up my test script from imports to processing a string to processing a file to dump'ing it to processing specific elements to iterating through it a couple of different ways.
Along the way, I made some mistakes, which were very helpful in leveling-up my python skills. At the end, not only did my script work, but I had gained a level of skill at my craft for a very specific use-case.
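The build-it-up-from-imports progression described above is small enough to sketch end to end with the standard library's `json` module (the sample data is made up):

```python
import json

# Parse a JSON string into Python objects.
data = json.loads('{"name": "widget", "tags": ["a", "b"], "qty": 3}')

# Access specific elements.
name = data["name"]          # nested access works the same way
first_tag = data["tags"][0]

# Iterate a couple of different ways.
for key, value in data.items():
    pass  # e.g. print(key, value)
for tag in data["tags"]:
    pass

# Serialize back to a string; json.dump/json.load do the same with files.
text = json.dumps(data, indent=2)
```

Each step here is a place to make (and learn from) exactly the kind of mistakes the comment describes, like confusing `loads`/`load` or indexing a dict with the wrong key.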
There are no shortcuts to up-leveling oneself, my friend, not in any endeavor, but especially not in programming, which may well be the most difficult job on the planet, given its ubiquity and overall lack of quality.
I don't really like the way LLMs code. I like coding. So I mostly do that myself.
However, I find it enormously useful to be able to ask an LLM questions. You know the sort of question you need to ask to build an intuition for something? Where it's not a clear question-and-answer type problem you could just Google, but the sort of thing where you'd traditionally have to hunt down a human being and ask them? LLMs are great at that. If I want to ask what the point of something is, an LLM can give me a much better idea than reading its Wikipedia page.
This sort of personalized learning experience that LLMs offer, your own private tutor (rather than some junior developer you're managing) is why all the schools that sit kids down with an LLM for two hours a day are crushing it on test scores.
It makes sense if you think about it. LLMs are superhuman geniuses in the sense of knowing everything. So use them for their knowledge. But knowing everything is distracting for them and, for performance reasons, LLMs tend to do much less thinking than you do. So any work where effort and focus is what counts the most, you're better off doing that yourself, for now.
But AI will review it and then you only have to .. oh
But AI will review AI and then you .. oh ..
2. Most analysts in a formal institution are professionally trained. In Europe, Canada and some parts of the US it's a profession with degree and training requirements. Most analysts have critical thinking skills, for sure the good ones.
3. OSINT is much more accessible because the evidence ISN'T ALWAYS controlled by a legal process so there are a lot of people who CAN be OSINT analysts or call themselves that and are not professionally trained. They are good at getting results from Google and a handful of tools or methods.
4. MY OPINION: The pressure to jump to conclusions with AI, whether financially motivated or not, comes from the perceived notion that with technology everything should be faster and easier. In most cases it is; however, just as technology is advancing, so is the amount of data. So you might not be as efficient as those around you expect, especially if they are using expensive tools, and there will be pressure to give in to AI's suggestions.
5. MY OPINION: OSINT and analysis is a tradecraft with a method. OSINT with AI makes things possible that weren't possible before, or that took way too much time to be worth it. It's more like: here are some possible answers where there were none before. Your job is to validate them now and see what assumptions have been made.
6. These assumptions have existed long before AI and OSINT. I've seen many cases where we have multiple people look at evidence to make sure no one is jumping to conclusions and to validate the data. MY OPINION: So this lack of critical thinking might also be because there are fewer people, or fewer passes, to validate the data.
7. Feel Free to ask me more.
Is it unfair to make such demands for the inclusion of 101-level stuff in non-programming content, or is it unfair to give IT topics a pass? Which approach fosters a community of winners and which one does the opposite? I'm confident that you can work it out.
So TLS and URL get a pass, BSD’s and MCP need to be defined at least once.
For example, suppose a person shares a photo online, and your intelligence objective is to find where they are. In that case, you might use GPS coordinates in the photo metadata or a famous landmark visible in the image to achieve your goal.
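For the GPS-metadata route: EXIF stores coordinates as degrees/minutes/seconds plus a hemisphere reference, so the usual first step after extracting the tags (with a library like Pillow) is converting them to decimal degrees. A small sketch of just that conversion (the sample coordinates are illustrative):

```python
# EXIF GPS values come as (degrees, minutes, seconds) with a
# hemisphere reference of "N"/"S" for latitude or "E"/"W" for longitude.
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert a DMS triple plus hemisphere ref to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. 48°51'29.6"N, 2°17'40.2"E lands near the Eiffel Tower in Paris.
lat = dms_to_decimal(48, 51, 29.6, "N")
lon = dms_to_decimal(2, 17, 40.2, "E")
```

Of course, metadata is often stripped by platforms on upload, which is when the landmark-matching approach takes over.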
This is just for others who are curious.
Offline version: https://www.kiwix.org
That doesn't actually work though. Try to set it up and it just fails to download.
There is a whole training section right there like you just didn’t feel like clicking on it
If anything, you've just pointed out how over-reliance on AI is weakening your ability to search for relevant information.