Posted by Hard_Space 10/25/2024

Geoffrey Hinton said machine learning would outperform radiologists by now (newrepublic.com)
52 points | 108 comments
napoleoncomplex 10/25/2024|
There is a tremendous share of medical specialties facing shortages, and fear of AI is not a relevant trend causing it. Even the link explaining the shortages in the above article is pretty clear on that.

I do agree with the article's author's other premise: radiology was one of those fields that a lot of people (me included) have been expecting to be largely automated, or at least the easy parts, as the author mentions, and the timelines are moving slower than expected. After all, pigeons perform similarly well to radiologists: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651348/ (not really, but it is basically obligatory to post this article in any radiology-themed discussion if you have radiology friends).

Knowing medicine, even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.

bachmeier 10/25/2024||
The reason AI is hyped is that it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task. Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years. But the low-hanging fruit is easy and fast; it may never be feasible to complete the last few percent. Then it changes from "AI replacement" to "AI assisted". I don't know much about radiology, but I remember before the pandemic one of the big fears was what we'd do with all the unemployed truck drivers.
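
To make the fallacy concrete, a toy sketch in Python (the numbers are made up, purely illustrative):

    years = [1, 2, 3]
    pct_done = [50, 75, 90]   # suppose each year closes half the remaining gap
    # naive linear extrapolation from the average rate so far:
    rate = (pct_done[-1] - pct_done[0]) / (years[-1] - years[0])   # 20 points/year
    print((100 - pct_done[-1]) / rate)   # forecasts 0.5 more years to finish
    # but if progress keeps halving the gap instead: 90 -> 95 -> 97.5 -> ...
    # the last few percent never arrive on the linear schedule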
candiddevmike 10/25/2024|||
> Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years

That's a hefty assumption, especially if you're including accuracy.

hnthrowaway6543 10/25/2024|||
> That's a hefty assumption, especially if you're including accuracy.

That's exactly what the comment is saying. People see AI do 80% of a task and assume development speed will follow a linear trend and the last 20% will get done relatively quickly. The reality is that the last 20% is hard to impossible. A prime example is self-driving vehicles, which have been 80% done and 5 years away for the past 15 years. (It actually looks further than 5 years away now that we know throwing more training data at the problem doesn't fix it.)

ninetyninenine 10/25/2024||
We are at 95 percent now for self-driving. I regularly use Waymo robot cars in SF. No driver. The gap is now just scaling.

Take what is 100% complete in one city and do it in another city.

Problem was solved… you just missed the boat.

hnthrowaway6543 10/25/2024||
Waymo barely works, with 24/7 monitoring by humans in a "fleet response" center[0], in 4 cities in the world. That's only 95% done if you're counting good enough for government work.

[0] https://waymo.com/blog/2024/05/fleet-response/

soco 10/28/2024||
The monitoring might be 24/7, but its reaction time is nowhere near usable in a life-and-death situation. I just cannot imagine a human being notified "I think I'm crashing into something" and able to take over and do anything of significance within that second to avoid the crash (except hitting the brakes, which the car could do just as well). So don't read too much into the response team; it definitely has its uses, but it won't save you from plunging into that sinkhole that just appeared.
bdndndndbve 10/25/2024||||
OP is being facetious; the "last 20%" is a common saying implying that you've back-loaded the hard part of a task.
rsynnott 10/25/2024|||
That's their point, I think; since the 50s or so, people have been making this mistake about AI and AI-adjacent things, and it never really plays out. That last '10%' often proves to be _impossible_, or at best very difficult; you could argue that OCR has managed it, finally, at least for simple cases, but it took about 40 years, say.
edanm 10/25/2024||||
> The reason AI is hyped is that it's easy to get the first 80% or 90% of what you need to be a viable alternative at some task.

No, it's because if the promise of certain technologies is reached, it'd be a huge deal. And of course, that promise has been reached for many technologies, and it's indeed been a huge deal. Sometimes less than people imagine, but often more than expected by the naysayers who think it won't have any impact at all.

pavel_lishin 10/25/2024|||
> Extrapolating in a linear fashion, AI will do the last 10-20% in a few months or maybe a couple years

Extrapolating in a linear fashion, in a few years my child will be ten feet tall, weigh six hundred pounds, and speak 17 languages.

The first 90% is the easy part. It's the other 90% that's hard. People forget that, especially people who don't work in software/technology.

bonoboTP 10/25/2024|||
> There is a tremendous share of medical specialties facing shortages

The supply of doctors is artificially constrained by the doctor cartel/mafia. There are plenty who want to enter but are prevented by artificial limits on training.

mananaysiempre 10/25/2024|||
Medical professionals are highly paid, thus an education in medicine is proportionally expensive. An education in medicine is expensive, thus younger medical professionals need to be highly paid in order to afford their debt. Until the vicious cycle here is broken (e.g. less accessible student loans? "more easily defaultable" is one way to spell "less accessible"), things are not going to improve. And there's also the problem that you want your doctors to be highly paid, because it's a stressful, high-responsibility job with a stupidly difficult education.
bonoboTP 10/25/2024||
US doctors are ridiculously overpaid compared to the rest of the developed world, such as the UK or western EU. There's no evidence that this translates to better care at all. It's all due to their regulatory capture. One possible outcome is that healthcare costs continue to balloon until eventually the bubble pops, the mafia gets disbanded, and more immigrant doctors are allowed to practice, driving prices to saner levels.
tzs 10/25/2024||
How doctors are licensed in the US compared to western Europe might explain why health care costs are higher in the US, but it does not explain why health care costs are rising so much. That's because health care costs are rising at similar rates in western Europe (and most of the rest of the first world).

For example from 2000 to 2018 here's the ratio of per capita health care costs in 2018 to the costs in 2000 for several countries:

  Germany 2.1
  France  1.8
  Canada  2.0
  Italy   1.7
  Japan   2.6
  UK      2.6
  US      2.3

Here's cost ratios over several decades compared to 1970 costs for the US, the UK, and France:

     1980 1990 2000 2010 2020
  US  3.2  8.2 13.9 24.1 36.3
  UK  3.1  6.3 15.3 27.8 40.5
  FR  3.4  7.6 14.9 21.1 28.5
Here's the same data showing the cost ratio decade to decade instead of from 1970:

     1980 1990 2000 2010 2020
  US  3.2  2.6  1.7  1.7  1.5
  UK  3.1  2.0  2.4  1.8  1.5
  FR  3.4  2.2  2.0  1.4  1.4
My data source was https://data.oecd.org/healthres/health-spending.htm but it looks like data.oecd.org reorganized their site, so that now redirects to https://www.oecd.org/en/data/indicators/health-spending.html which seems to have the data but with a much more limited interface.
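
Incidentally, the second table follows from the first by simple division; a minimal Python sketch that reproduces it:

    # decade-to-decade ratio = cumulative ratio / previous decade's cumulative ratio
    cumulative = {
        "US": [3.2, 8.2, 13.9, 24.1, 36.3],
        "UK": [3.1, 6.3, 15.3, 27.8, 40.5],
        "FR": [3.4, 7.6, 14.9, 21.1, 28.5],
    }
    for country, r in cumulative.items():
        step = [r[0]] + [round(b / a, 1) for a, b in zip(r, r[1:])]
        print(country, step)
    # US [3.2, 2.6, 1.7, 1.7, 1.5]  (matches the second table)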
bonoboTP 10/25/2024||
Doctor salary is not the only or perhaps even the main factor in healthcare expensiveness, but taking on the overall cost disease in healthcare would broaden the scope too wide for this thread, I think.

Also, I admit that the balloon may in fact never pop, since one theory says that healthcare costs so much simply because it can. It just expands until it costs as much as possible but not more. I'm leaning towards accepting Robin Hanson's signaling-based logic to explain it.

singleshot_ 10/25/2024|||
The supply of doctors would be much greater if incompetent people were allowed into the training pathway, for sure.
bonoboTP 10/25/2024|||
Yep, this is precisely what they argue. They don't simply say they want to keep their high salary and status due to undersupply. They argue that it's all about standards, patient safety etc. In the US, even doctors trained in Western Europe are kept out or strangled with extreme bureaucratic requirements. Of course, again the purported argument is patient safety. As if doctors in Europe were less competent. Health outcome data for sure doesn't indicate that, but smoke and mirrors remain effective.
DaveExeter 10/25/2024|||
Bad news...they're already there!
infecto 10/25/2024|||
I wouldn’t dismiss the premise so quickly. Other factors certainly play a role, but I imagine that after 2016, anyone considering a career in radiology would have automation as a prominent concern.
randomdata 10/25/2024||
Automation may be a concern. Not because of Hinton, though. There is only so much time in the day. You don't become a leading expert in AI like Hinton has without tuning out the rest of the world, which means a random Average Joe is apt to be in a better position than Hinton to predict when automation will be capable of radiology tasks. If an expert in radiology was/is saying it, then perhaps it is worth a listen. But Hinton is just about the last person you should listen to on this matter.
caeril 10/25/2024||
> even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.

What you're advocating for would be a crime against humanity.

Every four years, the medical industry kills a million Americans via preventable medical errors, roughly one third of which are misdiagnoses that were obvious in hindsight.

If we get to a point at which models are better diagnosticians than humans, even by a small margin, then delaying implementation by even one day will constitute wilful homicide. EVERY SINGLE PERSON standing in the way of implementation will have blood on their hands. From the FDA, to HHS, to the hospital administrators, to the physicians (however such a delay would play out) - every single one of them will be complicit in first-degree murder.

hggigg 10/25/2024||
That is because Mr Hinton is full of shit. He constantly overstates the progress to bolster his position on associated risks. And now someone gave him a bloody Nobel so he'll be even more intolerable.

What is behind the curtain is becoming obvious, and while there are some gains in some specific areas, the ability of this technology as it stands today to change society is mostly limited to pretending to be a solution for the usual shitty human behaviour of cost cutting and workforce reduction. For example, IBM laying off people under the guise of AI when actually it was a standard cost-cutting exercise with some marketing smeared over the top, while management told the remaining people to pick up the slack. A McKinsey special! And generating content to be consumed by people who can't tell the difference between humans and muck.

A fine example is from the mathematics side: we are constantly promised huge gains from LLMs, but we can't replace a flunked undergrad with anything yet. And this is because it is the wrong tool for the fucking job. Which is the problem in one simple statement: it's mostly the wrong tool for most things.

Still I enjoyed the investment ride! I could model that one with my brain fine.

auggierose 10/25/2024||
I hear existential despair. Mathematics is indeed a great example. Automation (AI) has made great strides in automating theorem proving in the last 30 years, and LLMs are just a cherry on top of that. A cherry though that will accelerate progress even further, by bringing the attention of people like Terence Tao to the cause. It will not change how mathematics is done within 2 years. It will totally revolutionise how mathematics is done within 20 years (and that is a conservative guess).
meroes 10/25/2024|||
ChatGPT and others still can't reliably sum several integers together, so when is an LLM going to replace anything meaningful in higher math?

I spent more time this week deciphering ChatGPT's mistakes than it took to program something in Python to sum some integers.
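
For reference, the Python route is basically one line; a minimal sketch (the file name "numbers.txt" is hypothetical, one integer per line):

    with open("numbers.txt") as f:
        print(sum(int(line) for line in f))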

auggierose 10/25/2024||
I don't use ChatGPT/Claude/Gemini for coding stuff I know how to do. I use it to explore stuff I don't know. It is very useful for that. And that is how LLMs and their descendants will conquer mathematics. LLMs are for intuition, not for logic. We know how to do logic. We didn't know how to do intuition.
hggigg 10/25/2024|||
No, it's a terrible example. We have ITPs and ATPs (interactive and automated theorem provers), and progress on ATPs, which are the ones currently considered magical, is flattening into a fine asymptote. ITPs are useful, but they have nothing to do with ML at all.

Putting a confident timescale on this stuff is like putting a timescale on UFT (unified field theory) 50 years ago. A lie.

Oh, and let's not forget that we need to start with a conjecture first, and we can't get any machine to come up with anything even remotely new there.

There is no existential despair. At all.

auggierose 10/25/2024||
I said 30 years because that's how long I have been a participant in the field. I have a PhD in ITP, and yes, I am pretty confident that automation is a big part of ITP now. Maybe not so much if you are into constructive type theory; that field is always a little bit hesitant to use the full power of anything.

The only one lying here is you. To yourself.

By the way, AI can come up with conjectures just fine. I might not be interested in most of them, but then again, I am not interested in most conjectures humans come up with.

jamal-kumar 10/25/2024||
I think my favorite thing is how people out there are reconsidering their programming careers and doing shit like quitting, while a bunch of less-than-quality developers lazily deploy code they pulled out of the ass of an LLM: no bounds checking, no input validation, none of that basic-as-can-be security shit at all. They just deploy all that code to production systems facing the internet, and then nobody notices because either the competent developers left for greener pastures or got laid off. I keep telling people this is going to take like 5 or 10 years to untangle, considering how hard it is in a lot of cases to fix code put into tons of stuff like embedded devices mounted up on a light pole or whatever that was considered good enough for a release, but I bet it'll be even more of a pain than that. These are, after all, security holes in devices that in many cases people's day or life going well depends on.

What it has done so far for me is put copywriters out of a job. I still find it mostly useful for writing drivel that gets used as page filler or product descriptions for my ecommerce side jobs. Lately the image recognition capabilities let me generate the kind of stuff I'd never write, which is instagram posts with tons of emojis and other things I'd never have the gumption to do myself but which increase engagement. I actually used to use a markov chain generator for this going back to 2016, though, so the big difference here is that at least it can form more coherent sentences.

hggigg 10/25/2024|||
Oh, I'm right in the middle of your first point. I'm going to have on my gravestone: "Never underestimate the reputation and financial damage a competent but lazy software developer can do with an incompetent LLM"

So what you're saying is that it's a good bullshit generator for marketing. That is fair :)

jamal-kumar 10/25/2024||
It's really excellent for that. I can give it an image of a product I'm trying to sell, and then say 'describe this thing but use a ton of marketing fluff to increase engagement and conversions' and it does exactly that.

There are tools out there to automatically do this with A/B testing, and I think even stuff like shopify plugins, but I still do it relatively manually.

I'll use it for code sparingly, for stuff like data transformations (Take this ridiculously flat, two-table database schema and normalize it to third normal form), but for straight-up generating code to be executed I'm way more careful to make sure it isn't giving me, or the people who get to use that code, garbage. It actually isn't that bad for basic security audits, and I suggest any devs reading this re-run their code through it with security-related prompting to see if they missed anything obvious. The huge problem is that at least half of devs with deadlines to meet are apparently not doing this, and I get to see it in the drop in quality of pull requests over the past 5 years.

hggigg 10/25/2024||
I find it a distraction on validation tasks. I prefer to know what the hell I am doing first and be able to qualify that as I work with test cases.

With respect to other engineers, I haven't found that any of our ML tools actually pick up anything useful from a code review yet.

On engagement, I saw a negative recently where someone started replacing banner images for events with AI generated content and their conversion to tickets plummeted. YMMV. People can spot it a mile off and they are getting better at it. I think the end game is soon.

ben_w 10/26/2024|||
> On engagement, I saw a negative recently where someone started replacing banner images for events with AI generated content and their conversion to tickets plummeted. YMMV. People can spot it a mile off and they are getting better at it.

Indeed.

I like this stuff, I have fun with it, I have no problem with others making and sharing pics they like just as I myself do… but when I notice it in the wild on packaging or advertising, it always makes me wonder: what other corners are getting cut?

(20 years ago, similar deal but with product art that had 3mm pixels and noticeable jpeg artefacts)

> I think the end game is soon.

Dunno. The models are getting better as well as the consumers noticing more — I've not seen Cronenberg fingers in a while now — and I don't know who will max out first, or when.

jamal-kumar 10/25/2024|||
Were they trying to generate images for marketing creatives like that? People can notice what that looks like right away, and it's repulsive for sure, unless they can't tell. Text, on the other hand, is another story.
nopinsight 10/25/2024||
I wonder what experts think about this specialized model:

"Harrison.rad.1 excels in the same radiology exams taken by human radiologists, as well as in benchmarks against other foundational models.

The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%).

Harrison.rad.1 scored 51.4 out of 60 (85.67%)."

https://harrison.ai/harrison-rad-1/

bonoboTP 10/25/2024||
Hinton doesn't take social/power aspects into account; he's a naive technical person. Healthcare is an extremely strongly regulated field and its professionals are among those with the highest status in society. Radiology won't be among the first applications to be automated. I see many naive software/ML people trying to enter the medical domain, but they are usually eaten for breakfast by the incumbents. This is not an open market like taxis vs Uber or one entertainment app vs another. If you try to enter the medical domain, you will be deep, deep into politics and lobbying, and you're going to be defeated.

Better to deploy it in fields like agriculture and manufacturing that are already in good part automated. Then, a generation later, one can try the medical domain.

alephnerd 10/25/2024||
> Radiology won't be among the first applications to be automated

Radiology and a lot of other clinical labwork are heavily outsourced already, and have been for decades [0][1].

Much of the analysis and diagnosis is already done by doctors in India and Pakistan before being returned to the hospitals in the West.

In fact, this is how Apollo Group and Fortis Health (two of India and Asia's largest health groups) expanded rapidly in the 2000s.

It's the back office teleradiology firms that are the target customers for Radiology Agents, and in some cases are funding or acquiring startups in the space already.

This has been an ongoing saga for over a decade now.

[0] - https://www.jacr.org/article/S1546-1440(04)00466-1/fulltext

[1] - https://www.reuters.com/article/business/healthcare-pharmace...

Galanwe 10/25/2024|||
Very counter-intuitive, yet after reading the sources definitely factual. Thanks for pointing that out.
bonoboTP 10/25/2024||||
Outsourcing is indeed a force pushing against doctor power. This just means that radiologists are already under attack and may feel cornered, so they are on the defensive.

I'm not saying automation won't ever happen. But it will need to be slow, so as to allow the elite doctor dynasties to recalibrate which specialties to send their kids to, and not to disrupt the already practicing ones. So Hinton's timeline was overly optimistic due to thinking only in terms of the tech. It will happen, but on a longer timescale, maybe a generation or so.

alephnerd 10/25/2024||
> But it will need to be slow, so as to allow the elite doctor dynasties to recalibrate which specialties to send their kids to, and not to disrupt the already practicing ones

That's not how much of medicine is run from a business standpoint anymore.

Managed Service Organizations (MSOs) and PE consolidation have become the norm for much of the medical industry now, because running a medical practice AND practicing medicine at the same time is hard.

Managing insurance billing, patient records, regulatory paperwork, payroll, etc is an additional 20-30 hours of work on top of practicing as a doctor (which is around 30-50 hours as well).

Due to this, single-practitioner clinics or partnership models get sold off, and the doctors themselves get a payout and are treated as employees.

> It will happen, but on a longer timescale, maybe a generation or so

I agree with you that a lot of AI/ML application timelines are overhyped, but in Radiology specifically the transition has already started happening.

The outsourced imaging model has been the norm for almost 30 years now, and most of the players began funding or acquiring startups in this space a decade ago already.

Is 100% automation in the next 5 to 10 years realistic? Absolutely not!

Is 30-50% automation realistic? I'd say so.

moralestapia 10/25/2024|||
Yeah, that's cool.

But it's not automation.

GP's point is that it will be really hard to take the specialist out of this process, mainly because of regulatory issues.

alephnerd 10/25/2024||
The point is, if you can substitute even 30-50% of the headcount used in initial diagnostics and analysis, your profit margins grow exponentially.

The customers for these kinds of Radiology Agents are the teleradiology and clinical labwork companies, as well as MSOs and Health Groups looking to cut outside spend by bringing some subset back in-house.

100% automation is unrealistic for a generation, but 30-50% of teleradiology and clinical labwork headcount being replaced by CV applications is absolutely realistic.

A lot of players that entered the telemedicine segment during the COVID pandemic and lockdowns have begun the pivot into this space, along with new-gen MSOs and legacy telemedicine organizations (e.g. Apollo, Fortis).

(Also, why are you getting downvoted? You brought up a valid point of contention)

moralestapia 10/25/2024||
That's true.

You may always need to keep one or two guys in there but they will be able to do 20x more with the same effort. Definitely plausible.

jhbadger 10/25/2024|||
>This is not an open market like taxis vs Uber

Interesting choice of example. In most places taxis aren't/weren't an "open market" either, but had various laws preventing newcomers from providing taxi rides. In many places what Uber did was illegal. Various locales either banned Uber, changed the laws to allow them, or simply didn't enforce the laws that made them illegal.

bonoboTP 10/25/2024|||
Yes, even Uber was pushed out in many places with lawfare and jobs arguments. Taxi drivers do have a moderately strong lobby in many places. So just think about taking on doctors... The discussion was never really about which service is better. It was about whose toes they were stepping on.

In healthcare you can't take the "asking for forgiveness is easier than permission" route. That's a quick way to jail. In taxiing, it's just some fines that fit the budget.

My overall point isn't to argue whether these disruptions are socially beneficial. I'm trying to point at the more neutral observation that who you're taking on and how much power they have will be crucial to the success of a disruptor. It's not just about "product quality" based on some objective dispassionate politics-free scientific benchmarks.

bko 10/25/2024||||
Medallion holders had a weaker, more fragmented system. If there had been a nationwide system, they would have been able to organize better and kill Uber. Considering groups like the AMA literally dictate how many people are allowed to graduate from med school in a given year, they obviously have more control.

Furthermore, on the state level there are often "Need Based Laws" that require healthcare providers to obtain approval before establishing or expanding certain healthcare facilities or services. This is unironically designed to prevent unnecessary duplication of healthcare services, control healthcare costs, and ensure equitable distribution of healthcare resources. Imagine how expensive it would be if we had too many choices!

The cab cartel was more or less city-level.

Add in the fact that "health care" is much more politically charged and it's easy to find useful idiots who want to "protect people's health", so enforcement is a lot easier. They're entirely different.

musicale 10/26/2024||
Not all conspiracy theories are false.

Good thing most people don't know about this, or they might get really angry at the AMA, perhaps even voting for representatives who might enact laws to reform the system.

Fortunately that would never happen.

eszed 10/25/2024|||
In context, I think GP meant "open" from a sociopolitical point of view. The "taxi-industrial complex" had minimal political power, so Uber was repeatedly able to steamroll the incumbents - exactly as you describe in your final sentence. Medical interests have immense sociopolitical power, so the same playbook won't work. The differences in your positions are merely semantic.
0xpgm 10/25/2024|||
I wish there were less focus on AI replacing humans and more focus on AI helping humans do better, i.e. Intelligence Augmentation. IA drove a lot of the rapid progress in the earlier days of computing that made computers usable by non-experts.

I suspect a lot of the thinking about replacing humans is driven largely by the nerd sci-fi fascination with the Singularity, but if a lot of the hype fails to materialize after billions of dollars have been poured into AI, there could be an over-correction by the market that would take away funding from even useful AI research.

I hope we'll hear less of AI replacing X, and more of AI enhancing X, where X is radiologists, programmers, artists etc.

from-nibly 10/25/2024|||
People becoming slightly more productive helps people. People being replaced helps quarterly earnings reports.
musicale 10/26/2024||
People becoming slightly more productive also helps earnings reports.

But automation is all about reducing labor costs. Even shoddy automation such as self checkout.

I don't see this changing any time soon.

randomdata 10/25/2024|||
Replacing is better than helping, though. It frees people up to develop capital – the very source of progress you speak of! More capital development, more progress.

If you still have to do the same job, but better, then you aren't freed up for a higher-level purpose. If we had taken your stance in earlier days of human history we'd all still be out standing in the field. Doing it slightly better perhaps, but still there, "wasting" time.

No, we are clearly better off that almost every job from the past was replaced. And we will be better off when our current jobs are replaced, so that our time is freed up to move on to the next big thing.

You don't think we have reached the pinnacle of human achievement already, surely?

croes 10/25/2024||
Capital is a tool for progress, not the source.

Newton, Einstein, etc. did what they did neither for capital nor by means of it.

And we all know that those “freed” people are less likely to gain any capital.

randomdata 10/25/2024||
Some schools would consider their ideas capital. But, regardless, if what they did wasn't useful in the development of capital, nobody would have cared. It is their contributions to capital development that have made them household names.
croes 10/25/2024||
I doubt that the contribution to capital development made Newton and Einstein household names.

When giant magnetoresistance was discovered, it wasn't useful at first. If everything is capital-focused, we likely miss those discoveries.

randomdata 10/25/2024||
Well, let's put it to the test. What "useless" scientific discovery has given someone a household name in recent years?
croes 10/25/2024||
What useful discovery has given someone a household name in recent years?

Household names are more likely celebrities than scientists.

randomdata 10/25/2024||
Hinton has become a household name, at least here in Canada. But only within recent years as his discoveries have now proven useful in capital creation. Nobody knew who he was in the 1980s when his discoveries were "useless".

Which stands to reason. People only think about you if you do something for them. Newton and Einstein enabled the creation of capital that changed the world, so they are remembered. The guy who discovered something amazing alongside them, but which we still haven't figured out how to use, remains unknown.

croes 10/25/2024||
Maybe we should define household first.

Are we talking about HN households or average Joe's?

randomdata 10/25/2024|||
Average Canadian Joe, at least.

I don't expect anyone in Zimbabwe knows who he is, just as I cannot name a single person in Zimbabwe despite being sure they too have household names. His capital enablements, while significant, haven't become world changing.

croes 10/25/2024||
Is it because he won the Nobel Prize, or is it more of a longer-lasting fame?

But I doubt people could name a winner after he’s out of the news.

Maybe Canada is different.

croes 10/25/2024|||
The point is he is wrong.

AI isn't better than humans, but we now have fewer radiologists anyway.

Now imagine it were a free market:

To raise profits, people get replaced by AI, but the AI underperforms.

The situation would be much worse.

mattlondon 10/25/2024|||
Perhaps in the US, where medicine and healthcare are hugely expensive with loads of vested interests, profiteering, and fingers-in-pies from every side of the table, sure.

But everywhere else in the developed world that has universal and free healthcare, I can imagine a lot of traction for AI from governments looking to make their health service more efficient and therefore cheaper (plus better for the patient too in terms of shorter wait times etc).

DeepMind has been doing a load of work with the NHS, the Royal Free Foundation, and Moorfields, for instance (although there have been some data protection issues there, but I suspect they are surmountable).

alephnerd 10/25/2024|||
> Perhaps in the US

Medical systems in countries like the UK [0], Australia [1], and Germany [2] all leverage teleradiology outsourcing at scale, and have done so for decades.

[0] - https://www.england.nhs.uk/wp-content/uploads/2021/04/B0030-...

[1] - https://onlinelibrary.wiley.com/doi/abs/10.1111/1754-9485.13...

[2] - https://www.healthcarebusinessinternational.com/german-hospi...

jdbc 10/25/2024||||
No one needs to (or should) wait for the US govt or the structurally exploitative private sector to reduce prices.

The US has concocted huge myths about why prices cannot fall no matter what tech or productivity gains happen. It has become like religious dogma.

People are already flying overseas in the tens of thousands for medical tourism. All Mexico has to do is set up hospitals along the border so people aren't flying elsewhere. This is what will happen in the US. The US has no hope of changing by itself.

bonoboTP 10/25/2024|||
I agree. These developments will be first deployed in countries like India, Mexico, China. If you're an entrepreneur aiming at making a difference through medical AI, it's probably best to focus on places where they honestly want and need efficiency, instead of just talking about it.
oliwarner 10/25/2024|||
What? Radiology is already full of ML, with companies offering quick reporting alongside a traditional slow human report. The UK's Royal College of Radiologists has guidance [0] on how to integrate AI, and many hospitals use it alongside other outsourced reporting tools.

This works in radiology because it's not surgery. A hallucination doesn't chop off a limb. Past the data protection issues in training, the risk is low, if done right.

AI offers speedy screening, first-pass results or an instant second opinion. You don't have to —and shouldn't yet— rely on it as your only source of truth, but the ship has sailed on hospitals integrating it into their workflow. It's good enough now, and will get better.

0: https://www.rcr.ac.uk/our-services/all-our-publications/clin...

antegamisou 10/25/2024||
Or maybe overconfident, arrogant CS majors aren't experts in every other unrelated subdomain, and it's not some absurd absence of meritocracy or bad gatekeeping that's keeping them from ruining yet more fields by making everything rely on their tools?
bonoboTP 10/25/2024|||
Yes, but this mostly plays out in ignoring social factors, like who can take on the blame, etc. CS people also underestimate the extreme levels of ego and the demigod-like self-image of doctors. They passed a rite of passage that was extremely exhausting and perhaps even humiliating during residency, so afterwards they are not ready to give up their hard-earned self-image. Doctor mentality remains the same as in Semmelweis' time.

Good faith is difficult to assume. I do agree that the real world is much more complex than simply which tool works better.

An analogy could be jury-based courts in the US. The reason for having juries and many of the rules are not really evidence-based. It's well known that juries can be biased in endless ways and are very easy to manipulate. Their main purpose though is not to make objectively correct decisions. The purpose is giving legitimacy to a consensus. Similarly, giving diagnosis-power to human doctors is not a question of accuracy. It's a question of acceptability/blame/status.

caeril 10/25/2024|||
Yes, it's the CS majors who are murdering a quarter million Americans annually due to preventable medical errors (including misdiagnoses based on radiology readings), good point.

Nobody, and I mean nobody, beats physicians at the Arrogance game.

The very second that ML models begin to consistently beat humans at diagnostics, it would be a moral imperative, as well as an absolute requirement by the terms of the Hippocratic Oath, to replace the humans.

antegamisou 11/3/2024||
> The very second that ML models begin to consistently beat humans at diagnostics, it would be a moral imperative, as well as an absolute requirement by the terms of the Hippocratic Oath, to replace the humans.

Why is HN so brainrot shit at times man wtf. And you're just proving that you're exactly the type of arrogant techbro I was talking about.

Kalanos 10/25/2024||
My cousin is an established radiologist. Since I am well-versed in AI, we talked about this at length.

He says that the automated predictions help free up radiologists to focus on the edge cases and more challenging disease types.

bee_rider 10/25/2024||
It looks like there are 3 possible outcomes: AI will do absolutely nothing, AI will enhance the productivity of radiologists, or AI will render their skills completely obsolete.

In case 1, learning radiology is a fine idea. In case 2, it becomes a little tricky; I guess if you are one of the most skilled radiologists you'll do quite well for yourself (do the work of 10 and I bet you can take the pay of 2). In case 3, it becomes a bad career choice.

Although, I dunno, it seems like an odd thing to complain about. I mean, the nature of humans is that we make tools—we’re going to automate if it is possible. Rather I think the problem is that we’ve decided that if somebody makes the wrong bet about how their field will go, they should live in misery and deprivation.

gtirloni 10/25/2024||
Scenario 2 is the most likely by far. It's just the continuation of a trend in radiology. Clinics already employ many more "technicians" with basic training and have a "proper" radiologist double checking and signing their work. If you're a technician (or a radiologist NOT supervising technicians), you're in hot water.
hilux 10/26/2024||
Scenario 2 has already arrived - it's just not fully rolled out.
amelius 10/25/2024||
I predict most AI developers will be out of a job soon.

Say you are making a robot that puts nuts and bolts together using DL models. In a few years, Google/OpenAI will have solved this and many other physical tasks, and any client that needs nuts and bolts put together will just buy a GenericRobot that can do that.

Same for radiology-based diagnostics. Soon the mega companies will have bought enormous amounts of data and they will easily put your small radiology-AI company out of business.

Making a tool to do X on a computer? Soon, there will be LLM-based tools that can do __anything__ on a computer and they will just interface with your screen and keyboard directly.

Etc. etc.

Just about anyone working in AI right now is building a castle in someone else's kingdom.

candiddevmike 10/25/2024||
> Just about anyone working in AI right now is building a castle in someone else's kingdom.

Things are innovating so fast that even if you've convinced yourself that you own the kingdom, it seems like you're one research paper or Nvidia iteration away from having to start over.

krisoft 10/25/2024|||
> I predict most AI developers will be out of a job soon.

Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.

> Just about anyone working in AI right now is building a castle in someone else's kingdom.

This doesn't ring true to me. When do you feel we should circle back to this prediction? Ten years? Fifteen?

Can you formulate your thesis in a form we can verify in the future? (long bets style perhaps)

jvanderbot 10/25/2024|||
> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.

Same for software devs: We'll all be working on AI that's working on our problems, but we'll also work on our problems to generate better training data, build out the rest of the automation and verify the AI's solutions, so really AI will be doing all the work and so will we.

amelius 10/25/2024||||
> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.

Maybe so, but what if AI developments accelerate and, since the big companies have all the data and computational power, they put you out of business faster than you can see the next thing the market wants?

krisoft 10/25/2024||
You mean Google, which seems institutionally incapable of picking up anything less than a king's ransom, develops a taste for it? :) I will continue worrying more about randomly slipping in the shower and breaking my neck. It feels like a more probable scenario.
amelius 10/25/2024|||
I thought about it some more.

> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.

This may not be true if data is the bottleneck. AI developers may be out of a job long before the data collection has been finished to train the models.

krisoft 10/25/2024||
> AI developers may be out of a job long before the data collection has been finished to train the models.

What do you think AI developers are doing?

hobs 10/25/2024|||
I predict that the current wave of AI will be useful but not nearly as useful as you hope, requiring ever more people to pour their time and energy into the overarching system it creates.
bonoboTP 10/25/2024|||
The general models tackle generic tasks. There is a very long tail of highly specialized tasks that you have never even heard of, and the generic AI won't pick up the necessary knowledge by reading books3 and watching YouTube.

Just one example would be material fatigue analysis on CT scans of airliner wings. How much of that data will be included in the GPT-6 that addresses such a use case? This is just one example that I heard about the other day when talking to an ML engineer.

The world consists of so so many diverse things that are not well documented, and the knowledge is closed off in a few companies globally. Things like pharma, physical engineering things, oil and gas, agriculture. There are companies for stuff like muzzle-based cattle identification systems or pig weight estimation from videos etc. Do you think these will become features in Gemini anytime soon?

Most of the world works on very niche things. People who build generic consumer apps have a very distorted view of what's actually making the world run.

cryptonym 10/25/2024||
This is not putting people out of a job; this is changing their jobs. As always, the workforce will adapt. Just like all previous tech, this is trying to improve efficiency. LLMs can't do "anything" on a computer, that's a misunderstanding of the tech, but they can already interface with screen/keyboard.

You overestimate the innovation abilities of big tech companies. They'll buy a lot of the companies designing robots that put nuts and bolts together, or that do diagnostics; that's a great opportunity.

ben_w 10/25/2024|||
> As always, the workforce will adapt

I was never a fan of the term "Singularity" for the AI thing. When mathematical singularities pop up in physics, it's usually a sign the physics is missing something.

Instead, I like to think of the AI "event horizon", the point in the future — always ahead, yet getting ever closer — beyond which you can no longer predict what happens next.

Obviously that will depend on how much attention you pay to the developments in the field (and I've seen software developers surprised by Google Translate having an AR mode a decade after the tech was first demonstrated), but there is an upper limit even for people who obsessively consume all public information about a topic: if you're reading about it when you go to sleep, will you be surprised when you wake up by all the overnight developments?

When that happens, no workforce can possibly adapt.

How close is that? Dunno, but what I can say is that I cannot tell you what to expect 2 months from now, despite following developments in this field as best I can from the outside, and occasionally implementing one or other of the recent-ish AI models myself.

cryptonym 10/25/2024||
You don't ask people to know everything about the latest tech; they just need to do their job. If a new tool's benefit is high enough, everybody will know about it and use it. Based on what you said, one could infer Translate's AR mode is a nice tool but not something that'll meaningfully change the life of the average Joe.

Humanity has been through multiple "event horizons", or whatever you want to call them. With industrialisation, one man with some steam could suddenly do the work of thousands, and it expanded to each and every activity. Multiple "event horizons" later (transport, communication, IT), we are still here with everyone working, differently than previous generations, but still working. We improved a bit by reducing child labor but doubled the workforce by including women, and we always need more people at work for longer.

The only constant thing is we keep people at work until we figure out how to burn the last remaining atom. People are far too optimistic believing a new tech will move people away from work.

ben_w 10/25/2024||
> Based on what you said, one could infer Translate's AR mode is a nice tool but not something that'll meaningfully change the life of the average Joe.

In every case I showed it to people, they were amazed and delighted because it absolutely did matter to them.

This may be a biased sample because of how many of my colleagues (like myself) didn't speak German very well despite being in Berlin, but it was definitely the case that they benefited from me showing them a thing invented a decade earlier.

> Humanity has been through multiple "event horizons", or whatever you want to call them.

No: an event horizon is always ahead of you. That's why I'm choosing that name and rejecting the other.

You can wake up to the news the world has changed unpredictably, or the stuff you thought was scifi turned out to be possible, but it doesn't happen every day, and even less so with your* job in particular.

* generic "you"

More common is e.g.: radio was invented, and almost immediately people started predicting that by 2000 there would be video conference calling — fiction writers can see possible futures coming well before it's possible. Plus some futures that aren't possible, of course.

Radioactivity gets discovered, Victorian fiction talks about it being weaponised. Incorrectly as it happens, but it was foreseeable.

When I was a kid, phone numbers were 6 digits, and "long distance" meant the next town over. It getting better and cheaper was predictable and predicted, as were personal videophones. Even the original iPhone wasn't much of a step change from everything around it, and no single update to the iPhone was either.

Sputnik? Led to Star Trek, which even in TOS had characters using tablet computers that wouldn't be realised for 40-50 years.

AI? The fiction is everything and anything from I Have No Mouth, and I Must Scream to The Culture, via Her, The Terminator, Colossus: The Forbin Project, Short Circuit, and Isaac Asimov, and the attempts to do "real" predictions are even more diverse and include precursors for each scenario and what to expect on the way.

Nobody knows how close any of the speculation is, or how soon it is assuming it's right, or even when specific things such as full self drive is coming (despite being in 80s TV).

We don't know how long it will take between FSD and humanoid robots passing the same legal/safety standard (despite everyone already being familiar with the ideas since The Jetsons) or what having them would mean in practice.

We don't even know what we're getting from the AI labs as soon as the US election is over. From what I've heard, neither do they.

moralestapia 10/25/2024|||
I don't know why you're downvoted; you are correct. People just don't like their bubbles being popped.

I do software, have done it for about 20 years. My job has been perpetually "on the verge of becoming obsolete in the next few years", lol. Also, "yeah but as you get older people will not want to hire you" and all that.

Cool stories, *yawn*.

Meanwhile, in reality, my bank account disagrees, and more and more work keeps becoming available to me.

odyssey7 10/25/2024||
Why automate the analysis of an MRI when the bottleneck is that they only have two MRI machines in the whole city, for some reason?
righthand 10/25/2024||
I worked for a startup that was contracted by a German doctor. We built some software for automating the identification of lung lesions: a viewer app for each X-ray, scientific trials software, and an LLM to process data from the trials software. The result was pretty cool and probably would have moved things forward. We even held a few trials to gather data from many doctors in China. Then the doctor didn't pay his bills and the project was shut down.

This wouldn’t have put radiologists out of work but would have made the process to diagnose much quicker maybe more affordable.

betaby 10/27/2024|
Radiologists are a cartel in Canada/the USA, making ~$400-600k/year. Those jobs are protected and will continue to be human-staffed for the foreseeable future regardless of AI efficiency.