Posted by Hard_Space 10/25/2024
2039 is really hard to foresee.
It’s not about whether the present AI bubble is a giant sham that’ll implode like a black hole, or if it’s the real deal and AGI is days away. It’s always been about framing the consequences of either outcome in a society where 99.9% of humans must work to survive, and the implications of a giant economic collapse on those least able to weather its impact in a time of record wealth and income inequality. It doesn’t matter which side is right, because millions of people will lose their jobs as a result and thousands will die. That is what some of us are trying to raise the alarm about, and in a post-2008-collapse world, young people are absolutely going to do what it takes to survive rather than what’s best for society - which is going to have knock-on effects like the author described (labor shortages in critical fields) for decades to come.
In essence, if one paper from one guy managed to depress enrollment in a critical specialty for medicine, and he was flat-out wrong, then imagine the effects the Boosters and Doomers are having on other fields in the present cycle. If I were a betting dinosaur, I suspect there will be a similar lack of highly-skilled programmers and knowledge workers due to the “copilot tools will replace programmers” hype of present tools, and then we’ll have these same divorced-from-society hypesters making the same predictions again, ignorant of the consequences of their prior actions.
Which is all a lot of flowery language just to say, “maybe we need to stop evaluating these things solely as ways to remove humans from labor or increase profit margins, and instead consider the broader effects of our decisions before acting on them.”
While I don't think AI will be banned wholesale from, say, interstate commerce (even though it really should be until the industry self-regulates), the moment it starts killing people in medicine it will be banned there. And it's obvious it'll kill people. In radiology, for example, it'll add its bullshit to images, things that just aren't there. And so forth. It's just a question of how and when, not if.
No dystopian novel has predicted how fucking mundane it'll be when AI begins to kill people.
Assuming you’re referring to this story, that’s a stretch: https://www.nytimes.com/2024/10/23/technology/characterai-la...
“His stepfather’s .45 caliber handgun” being unlocked and available to him seems like the proximate cause.
Have you heard the one about suicide rates in the UK falling when they switched to non-toxic cooking gas? https://pmc.ncbi.nlm.nih.gov/articles/PMC478945/pdf/brjprevs...
Not hard to propose that difficulty of killing / dying does decrease likelihood of killing / dying. Lock up your guns.
https://www.ncbi.nlm.nih.gov/books/NBK374099/#:~:text=We%20h...
That said, stuff like this definitely makes me think there's a lot of people on the margin who would be affected by what a chatbot says. Even a much older model like ELIZA.
Yeah, that's the problem. Tech bros high on the AI kool-aid think the problem is not the chatbot driving him to suicide but having access to a gun.
Though in this case, I'd point fingers at "American culture" rather than "tech bros":
As the Onion says: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...
Guns are pretty much irrelevant to the story, that's my point
The focus of the story is an AI chatbot murdering a human
Well, those aren't the right words; we don't even have the right words for this, because murder presumes intent and obviously there's no intent.
An automated plagiarism machine spewed bullshit that pushed a teen to suicide. Perhaps that's the right wording.
Easy access to firearms is totally in the same category as easy access to painkillers to overdose on, as similar research has demonstrated: https://pmc.ncbi.nlm.nih.gov/articles/PMC4566524/#:~:text=Fi....
I have opined before that I would be surprised if ChatGPT specifically has caused the deaths of fewer than 200 people, just by giving bad advice — that's one in a million of the users, and I think that's plausible given the story of a different chatbot almost leading to botulism poisoning.
I also think the general internet should have an 18 rating: to the extent that the argument for film classifications is valid, and given that the internet is too big to be classified and definitely contains material beyond the most extreme allowed in cinema, we should treat it as 18-only content except where it's demonstrably suitable for younger audiences.
Upshot of the article: we don't have enough radiologists because Geoffrey Hinton said in 2016 they wouldn't be needed.
Whether Hinton had an impact on people choosing radiology, I don't really know. But in the US, demographers and old-guard types in the medical profession made a REALLY bad call much more than ten years ago and cut the number of med school students they were calling for, and we're all living with the consequences.
If you're going to complain about Hinton, you should at least normalize for the lack of doctors in other fields that he didn't call out. Boomers are consuming a LOT of medical care, and in the US we have knock-on effects from the opioid crisis. It was not a good idea to cut physician recruitment and enrollment when they did, whether they did it for principled reasons ("we only need so many, and this way we'll have the best") or for less principled ones ("a legion of cheap young doctors competing for our jobs doesn't sound that great to the physicians' board at this research hospital").