Posted by walterbell 2 days ago
Eventually, Brazil (1985) happens: everyone gives unquestioning trust to a flawed system, to the detriment of Archibald Buttle.
Techne is the Greek word for craft or skill.
from pydantic import BaseModel

class ParsedAddress(BaseModel):
    street: str | None
    postcode: str | None
    city: str | None
    province: str | None
    country_iso2: str | None
Response:

{
    "street": "Boulevard",
    "postcode": 12345,
    "city": "Cannot be accurately determined from the input",
    "province": "MY and NY are both possible in the provided address",
    "country_iso2": "US"
}

Sure, I can spend 2 days trying out different models and tweaking the prompts to see which one gets it right, but I have 33 billion other addresses and a finite amount of time.
The issue occurs in OSINT as well: a well-structured answer lures people into a mental trap. Anthropomorphism is something humans have fallen for since the dawn of mankind, and we are falling for it yet again with AI. The thought that you have someone intelligent nearby with god-like abilities can be comforting, but... um... LLMs don't work like that.
To pick an extreme example, programmers using a strongly typed language might not bother manually checking for potential type errors in their code, leaving it to the type checker to catch them. If the type checker turns out to be buggy, then their code may fail in production due to that sloppiness. However, we expect the code to eventually be free of type errors to a superhuman extent, because they are using a tool whose strengths cover their personal weaknesses.
AI isn't as provably correct as a type checker, but it's pretty good at critical thinking (superhuman compared to the average HN argument), and human analysts also routinely leave a trail of mistakes in their wake. The real question is what influence the AI has on quality, and I don't see why the assumption is that it's negative. It might well be, but the article doesn't seem to go into that in any depth.
It is, but it adds disingenuous apologetics.
Not wishing to pick on this particular author, or even this particular topic, but it follows a clear pattern that you can find everywhere in tech journalism:
Some really bad thing X is happening. Everyone knows X is happening.
There is evidence X is happening. But I am *not* arguing against X
because that would brand me a Luddite/outsider/naysayer.... and we
all know a LOT of money and influence (including my own salary)
rests on nobody talking about X.
Practically every article on the negative effects of smartphones or
social media printed in the past 20 years starts with the same chirpy
disavowal of the author's actual message. Something like: "Smartphones and social media are an essential part of modern life today... but"
That always sounds like those people who say "I'm not a racist, but..."
Sure, we get it, there's a lot of money and powerful people riding on "AI". Why water down your message of genuine concern?
Maybe what I'm getting at is this poem [0] by Taylor Mali. Somehow we all lost our nerve to challenge really, really bad things, wrapping up messages in tentative language. Sometimes that's a genuine attempt at balance, or honesty. But often these days I feel an author is trying too hard to distance themself from... from themself.
It's a silly bugbear, I know.
[0] https://taylormali.com/poems/totally-like-whatever-you-know/
It’s not. It’s a rant against people and their laziness and gullibility.