Posted by Hard_Space 10/25/2024
I do agree with the article author's other premise: radiology is one of those fields that a lot of people (me included) have been expecting to be largely automated, or at least the easy parts, as the author mentions, and the timelines are moving more slowly than expected. After all, pigeons perform similarly well to radiologists: https://pmc.ncbi.nlm.nih.gov/articles/PMC4651348/ (not really, but it is basically obligatory to post this article in any radiology-themed discussion if you have radiology friends).
Knowing medicine, even when the tech does become "good enough", it will take another decade or two before it becomes the main way of doing things.
That's a hefty assumption, especially if you're including accuracy.
That's exactly what the comment is saying. People see AI do 80% of a task and assume development speed will follow a linear trend and the last 20% will get done relatively quickly. The reality is the last 20% is hard-to-impossible. Prime example is self-driving vehicles, which have been 80% done and 5 years away for the past 15 years. (It actually looks further than 5 years away now that we know throwing more training data at the problem doesn't fix it.)
Take what is 100% complete in one city and do it in another city.
Problem was solved… you just missed the boat.
No, it's because if the promise of certain technologies is reached, it'd be a huge deal. And of course, that promise has been reached for many technologies, and it's indeed been a huge deal. Sometimes less than people imagine, but often more than the naysayers who think it won't have any impact at all.
Extrapolating in a linear fashion, in a few years my child will be ten feet tall, weigh six hundred pounds, and speak 17 languages.
The first 90% is the easy part. It's the other 90% that's hard. People forget that, especially people who don't work in software/technology.
The supply of doctors is artificially constrained by the doctor cartel/mafia. There are plenty who want to enter but are prevented by artificial limits in training.
For example, here are the ratios of per capita health care costs in 2018 to the costs in 2000 for several countries:
Germany: 2.1
France:  1.8
Canada:  2.0
Italy:   1.7
Japan:   2.6
UK:      2.6
US:      2.3
Here's cost ratios over several decades compared to 1970 costs for the US, the UK, and France:
     1980  1990  2000  2010  2020
US    3.2   8.2  13.9  24.1  36.3
UK    3.1   6.3  15.3  27.8  40.5
FR    3.4   7.6  14.9  21.1  28.5
Here's the same data showing the cost ratio decade to decade instead of relative to 1970:

     1980  1990  2000  2010  2020
US    3.2   2.6   1.7   1.7   1.5
UK    3.1   2.0   2.4   1.8   1.5
FR    3.4   2.2   2.0   1.4   1.4
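The decade-to-decade table follows directly from the cumulative one: each entry is that decade's cumulative ratio divided by the previous decade's. A minimal sketch reproducing it from the figures quoted above:

```python
# Cumulative per-capita cost ratios vs. 1970, as quoted in the comment,
# for 1980, 1990, 2000, 2010, 2020.
cumulative = {
    "US": [3.2, 8.2, 13.9, 24.1, 36.3],
    "UK": [3.1, 6.3, 15.3, 27.8, 40.5],
    "FR": [3.4, 7.6, 14.9, 21.1, 28.5],
}

for country, ratios in cumulative.items():
    # First entry is already relative to 1970; each later entry is
    # divided by the previous decade's cumulative ratio.
    decade = [ratios[0]] + [round(b / a, 1) for a, b in zip(ratios, ratios[1:])]
    print(country, decade)
```

Running this reproduces the decade-to-decade rows above, e.g. `US [3.2, 2.6, 1.7, 1.7, 1.5]`.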
My data source was https://data.oecd.org/healthres/health-spending.htm but it looks like data.oecd.org reorganized their site, so that now redirects to https://www.oecd.org/en/data/indicators/health-spending.html which seems to have the data but with a much more limited interface.

Also, I admit that the balloon may in fact never pop, since one theory says that healthcare costs so much simply because it can: it just expands until it costs as much as possible but not more. I'm leaning towards accepting Robin Hanson's signaling-based logic to explain it.
What you're advocating for would be a crime against humanity.
Every four years, the medical industry kills a million Americans via preventable medical errors, roughly one third of which are misdiagnoses that were obvious in hindsight.
If we get to a point at which models are better diagnosticians than humans, even by a small margin, then delaying implementation by even one day will constitute wilful homicide. EVERY SINGLE PERSON standing in the way of implementation will have blood on their hands. From the FDA, to HHS, to the hospital administrators, to the physicians (however such a delay would play out) - every single one of them will be complicit in first-degree murder.
What is behind the curtain is becoming obvious. While there are some gains in some specific areas, the ability of this technology, as it stands today, to change society is mostly limited to pretending to be a solution for the usual shitty human behaviour of cost cutting and reducing the workforce. For example, IBM laying off people under the guise of AI, when actually it was a standard cost-cutting exercise with some marketing smeared over the top while management told the remaining people to pick up the slack. A McKinsey special! And generating content to be consumed by people who can't tell the difference between humans and muck.
A fine example from the mathematics side: we are constantly promised huge gains from LLMs, but we can't replace a flunked undergrad with anything yet. And this is because it is the wrong tool for the fucking job. Which is the problem in one simple statement: it's mostly the wrong tool for most things.
Still I enjoyed the investment ride! I could model that one with my brain fine.
I spent more time this week deciphering ChatGPT's mistakes than it would have taken me to write something in Python to sum some integers.
Putting a confident timescale on this stuff is like putting a timescale on UFT (unified field theory) 50 years ago. A lie.
Oh and lets not forget that we need to start with a conjecture first and we can't get any machine to come up with anything even remotely new there.
There is no existential despair. At all.
The only one lying here is you. To yourself.
By the way, AI can come up with conjectures just fine. I might not be interested in most of them, but then again, I am not interested in most conjectures humans come up with.
What it has done so far for me is put copywriters out of a job. I still find it mostly useful for writing drivel that gets used as page filler or product descriptions for my ecommerce side jobs. Lately the image recognition capabilities let me generate the kind of stuff I'd never write: instagram posts with tons of emojis and other things I'd never have the gumption to do myself, but which increase engagement. I actually used to use a markov chain generator for this going back to 2016, though, so the big difference here is that at least it can form more coherent sentences.
So what you're saying is that it's a good bullshit generator for marketing. That is fair :)
There are tools out there to do this automatically with A/B testing, and I think even stuff like Shopify plugins, but I still do it relatively manually.
I'll use it for code sparingly, for stuff like data transformations ("Take this ridiculously flat, two-table database schema and normalize it to third normal form"), but for straight-up generating code to be executed I'm way more careful to make sure it isn't giving me, or the people who get to use that code, garbage. It isn't that bad for basic security audits, actually, and I suggest any devs reading this re-run their code through it with security-related prompting to see if they missed anything obvious. The huge problem is that at least half of devs with deadlines to meet are apparently not doing this, and I get to see it in the drop in quality of pull requests over the past 5 years.
With respect to other engineers I haven't found any of our ML tools actually pick up anything useful yet from a code review.
On engagement, I saw a negative recently where someone started replacing banner images for events with AI generated content and their conversion to tickets plummeted. YMMV. People can spot it a mile off and they are getting better at it. I think the end game is soon.
Indeed.
I like this stuff, I have fun with it, I have no problem with others making and sharing pics they like just as I myself do… but when I notice it in the wild on packaging or advertising, it always makes me wonder: what other corners are getting cut?
(20 years ago, similar deal but with product art that had 3mm pixels and noticeable jpeg artefacts)
> I think the end game is soon.
Dunno. The models are getting better even as consumers get better at noticing — I've not seen Cronenberg fingers in a while now — and I don't know who will max out first, or when.
"Harrison.rad.1 excels in the same radiology exams taken by human radiologists, as well as in benchmarks against other foundational models.
The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%).
Harrison.rad.1 scored 51.4 out of 60 (85.67%)."
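For what it's worth, the quoted percentages do check out against the exam's 60-point maximum (scores as quoted in the press release above):

```python
# Verify the FRCR 2B Rapids percentages quoted above.
frcr_max = 60
human_reattempt_avg = 50.88   # average human score on re-attempt within a year
model_score = 51.4            # Harrison.rad.1's reported score

print(round(100 * human_reattempt_avg / frcr_max, 1))  # 84.8
print(round(100 * model_score / frcr_max, 2))          # 85.67
```

So the claimed margin over the human re-attempt average is about half a point out of 60.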
Better to deploy it in fields like agriculture and manufacturing that are already largely automated. Then, a generation later, one can try the medical domain.
Radiology and a lot of other clinical labwork is heavily outsourced already, and has been for decades [0][1].
Much of the analysis and diagnosis is already done by doctors in India and Pakistan before being returned to the hospitals in the West.
In fact, this is how Apollo Group and Fortis Health (two of India and Asia's largest health groups) expanded rapidly in the 2000s.
It's the back office teleradiology firms that are the target customers for Radiology Agents, and in some cases are funding or acquiring startups in the space already.
This has been an ongoing saga for over a decade now.
[0] - https://www.jacr.org/article/S1546-1440(04)00466-1/fulltext
[1] - https://www.reuters.com/article/business/healthcare-pharmace...
I'm not saying automation won't ever happen. But it will need to be slow, so as to allow the elite doctor dynasties to recalibrate which specialties to send their kids into, and not to disrupt the already-practicing ones. So Hinton's timeline was overly optimistic, due to thinking only in terms of the tech. It will happen, but on a longer timescale, maybe a generation or so.
That's not how much of medicine from a business standpoint is run anymore.
Managed Service Organizations (MSOs) and PE consolidation have become the norm for much of the medical industry, because running a medical practice AND practicing medicine at the same time is hard.
Managing insurance billing, patient records, regulatory paperwork, payroll, etc. is an additional 20-30 hours of work on top of practicing as a doctor (which is around 30-50 hours as well).
Due to this, single practitioner clinics or partnership models get sold off and the doctor themselves gets a payout plus gets treated as an employee.
> It will happen, but on a longer timescale, maybe a generation or so
I agree with you that a lot of AI/ML application timelines are overhyped, but in Radiology specifically the transition has already started happening.
The outsourced imaging model has been the norm for almost 30 years now, and most of the players began funding or acquiring startups in this space a decade ago already.
Is 100% automation in the next 5 to 10 years realistic? Absolutely not!
Is 30-50% automation realistic? I'd say so.
But it's not automation.
GP's point is that it will be really hard to take the specialist out of this process, mainly because of regulatory issues.
The customers for these kinds of Radiology Agents are the teleradiology and clinical labwork companies, as well as MSOs and Health Groups looking to cut outside spend by bringing some subset back in-house.
100% automation is unrealistic for a generation, but 30-50% of teleradiology and clinical labwork headcount being replaced by CV applications is absolutely realistic.
A lot of players that entered the telemedicine segment during the COVID pandemic and lockdowns have begun the pivot into this space, along with the new-gen MSOs and the legacy telemedicine organizations (e.g. Apollo, Fortis).
(Also, why are you getting downvoted? You brought up a valid point of contention)
You may always need to keep one or two guys in there but they will be able to do 20x more with the same effort. Definitely plausible.
Interesting choice of example. In most places taxis aren't/weren't an "open market" either, but had various laws preventing newcomers from providing taxi rides. In many places, what Uber did was illegal. Various locales either banned Uber, changed the laws to allow them, or simply didn't enforce the laws that made them illegal.
In healthcare you can't take the "asking for forgiveness is easier than permission" route. That's a quick way to jail. In taxiing, it's just some fines that fit the budget.
My overall point isn't to argue whether these disruptions are socially beneficial. I'm trying to point at the more neutral observation that who you're taking on and how much power they have will be crucial to the success of a disruptor. It's not just about "product quality" based on some objective dispassionate politics-free scientific benchmarks.
Furthermore, at the state level there are often "Need Based Laws" that require healthcare providers to obtain approval before establishing or expanding certain healthcare facilities or services. This is unironically designed to prevent unnecessary duplication of healthcare services, control healthcare costs, and ensure equitable distribution of healthcare resources. Imagine how expensive it would be if we had too many choices!
The cab cartel was more or less city level.
Add in the fact that "health care" is much more politically charged, and that it's easy to find useful idiots who want to "protect people's health," so enforcement is a lot easier. They're entirely different.
Good thing most people don't know about this, or they might get really angry at the AMA, perhaps even voting for representatives who might enact laws to reform the system.
Fortunately that would never happen.
I suspect a lot of the thinking about replacing humans is driven by the nerd sci-fi fascination with the Singularity. But if a lot of the hype fails to materialize after billions of dollars have been poured into AI, there could be an over-correction by the market that takes away funding from even useful AI research.
I hope we'll hear less of AI replacing X, and more of AI enhancing X, where X is radiologists, programmers, artists etc.
But automation is all about reducing labor costs. Even shoddy automation such as self checkout.
I don't see this changing any time soon.
If you still have to do the same job, just better, then you aren't freed up for a higher-level purpose. If we had taken your stance in earlier days of human history, we'd all still be out standing in the field. Doing it slightly better, perhaps, but still there, "wasting" time.
No, we are clearly better off that almost every job from the past was replaced. And we will be better off when our current jobs are replaced so that our time is freed up so to move onto the next big thing.
You don't think we have reached the pinnacle of human achievement already, surely?
Newton, Einstein, etc. did their work neither for nor by capital.
And we all know that those “freed” people are less likely to gain any capital.
When giant magnetoresistance was discovered, it wasn't useful at first. If everything is capital-focused, we likely miss those discoveries.
Household names are more likely celebrities than scientists.
Which stands to reason. People only think about you if you do something for them. Newton and Einstein enabled the creation of capital that changed the world, so they are remembered. The guy who discovered something amazing alongside them, but which we still haven't figured out how to use, remains unknown.
Are we talking about HN households or the average Joe's?
I don't expect anyone in Zimbabwe knows who he is, just as I cannot name a single person in Zimbabwe despite being sure they too have household names. His capital enablements, while significant, haven't become world changing.
But I doubt people could name a winner after he’s out of the news.
Maybe Canada is different.
AI isn't better than humans, but we now have fewer radiologists.
Now imagine it were a free market:
To raise profits, people get replaced by AI, but the AI underperforms.
The situation would be much worse.
But everywhere else in the developed world that has universal and free healthcare, I can imagine a lot of traction for AI from governments looking to make their health service more efficient and therefore cheaper (plus better for the patient too in terms of shorter wait times etc).
DeepMind has been doing a load of work with the NHS, Royal Free Foundation, and Moorfields, for instance (although there have been some data protection issues there, which I suspect are surmountable).
Medical systems in countries like the UK [0], Australia [1], and Germany [2] all leverage teleradiology outsourcing at scale, and have done so for decades.
[0] - https://www.england.nhs.uk/wp-content/uploads/2021/04/B0030-...
[1] - https://onlinelibrary.wiley.com/doi/abs/10.1111/1754-9485.13...
[2] - https://www.healthcarebusinessinternational.com/german-hospi...
The US has concocted huge myths about why prices cannot fall no matter what tech or productivity gains happen. It has become like religious dogma.
People are already flying overseas in the tens of thousands for medical tourism. All Mexico has to do is set up hospitals along the border so people aren't flying elsewhere. This is what will happen in the US. The US has no hope of changing by itself.
This works in radiology because it's not surgery. A hallucination doesn't chop off a limb. Past data protection in training, the risk is low, if done right.
AI offers speedy screening, first-pass results or an instant second opinion. You don't have to —and shouldn't yet— rely on it as your only source of truth, but the ship has sailed on hospitals integrating it into their workflow. It's good enough now, and will get better.
0: https://www.rcr.ac.uk/our-services/all-our-publications/clin...
Good faith is difficult to assume. I do agree that the real world is much more complex than simply which tool works better.
An analogy could be jury-based courts in the US. The reason for having juries and many of the rules are not really evidence-based. It's well known that juries can be biased in endless ways and are very easy to manipulate. Their main purpose though is not to make objectively correct decisions. The purpose is giving legitimacy to a consensus. Similarly, giving diagnosis-power to human doctors is not a question of accuracy. It's a question of acceptability/blame/status.
Nobody, and I mean nobody beats physicians at the Arrogance game.
The very second that ML models begin to consistently beat humans at diagnostics, it would be a moral imperative, as well as an absolute requirement by the terms of the Hippocratic Oath, to replace the humans.
Why is HN so brainrot shit at times man wtf. And you're just proving that you're exactly the type of arrogant techbro I was talking about.
He says that the automated predictions help free up radiologists to focus on the edge cases and more challenging disease types.
In case 1, learning radiology is a fine idea. In case 2, it becomes a little tricky, I guess if you are one of the most skilled radiologists you’ll do quite well for yourself (do the work of 10 and I bet you can take the pay of 2). For case 3, it becomes a bad career choice.
Although, I dunno, it seems like an odd thing to complain about. I mean, the nature of humans is that we make tools—we’re going to automate if it is possible. Rather I think the problem is that we’ve decided that if somebody makes the wrong bet about how their field will go, they should live in misery and deprivation.
Say you are making a robot that puts nuts and bolts together using DL models. In a few years, Google/OpenAI will have solved this and many other physical tasks, and any client that needs nuts and bolts put together will just buy a GenericRobot that can do that.
Same for radiology-based diagnostics. Soon the mega companies will have bought enormous amounts of data and they will easily put your small radiology-AI company out of business.
Making a tool to do X on a computer? Soon, there will be LLM-based tools that can do __anything__ on a computer and they will just interface with your screen and keyboard directly.
Etc. etc.
Just about anyone working in AI right now is building a castle in someone else's kingdom.
Things are innovating so fast that even if you've convinced yourself that you own the kingdom, it seems like you're one research paper or Nvidia iteration away from having to start over.
Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.
> Just about anyone working in AI right now is building a castle in someone else's kingdom.
This doesn't ring true to me. When do you feel we should circle back to this prediction? Ten years? Fifteen?
Can you formulate your thesis in a form we can verify in the future? (long bets style perhaps)
Same for software devs: We'll all be working on AI that's working on our problems, but we'll also work on our problems to generate better training data, build out the rest of the automation and verify the AI's solutions, so really AI will be doing all the work and so will we.
Maybe so, but what if AI developments accelerate and, since the big companies have all the data and computational power, they put you out of business faster than you can see the next thing the market wants?
> Once AI developers are out of a job, everyone will be out of every job. If not, then the AI developers still have a job: working on those jobs which haven't been automated yet.
This may not be true if data is the bottleneck. AI developers may be out of a job long before the data collection has been finished to train the models.
What do you think AI developers are doing?
Just one example would be material fatigue analysis on CT scans of airliner wings. How much such data will be included in the GPT-6 that addresses such a use case? This is just one example that I've heard about the other day when talking to an ML engineer.
The world consists of so so many diverse things that are not well documented, and the knowledge is closed off in a few companies globally. Things like pharma, physical engineering things, oil and gas, agriculture. There are companies for stuff like muzzle-based cattle identification systems or pig weight estimation from videos etc. Do you think these will become features in Gemini anytime soon?
Most of the world works on very niche things. People who build generic consumer apps have a very distorted view of what's actually making the world run.
You overestimate the innovative abilities of big tech companies. They'll buy a lot of companies designing robots that put nuts and bolts together, or do diagnostics; that's a great opportunity.
I was never a fan of the term "Singularity" for the AI thing. When mathematical singularities pop up in physics, it's usually a sign the physics is missing something.
Instead, I like to think of the AI "event horizon", the point in the future — always ahead, yet getting ever closer — beyond which you can no longer predict what happens next.
Obviously that will depend on how much attention you pay to the developments in the field (and I've seen software developers surprised by Google Translate having an AR mode a decade after the tech was first demonstrated), but there is an upper limit even for people who obsessively consume all public information about a topic: if you're reading about it when you go to sleep, will you be surprised when you wake up by all the overnight developments?
When that happens, no workforce can possibly adapt.
How close is that? Dunno, but what I can say is that I cannot tell you what to expect 2 months from now, despite following developments in this field as best I can from the outside, and occasionally implementing one or other of the recent-ish AI models myself.
Humanity has been through multiple "event horizons", or whatever you want to call them. With industrialisation, one man with some steam could suddenly do the work of thousands, and it expanded to each and every activity. Multiple "event horizons" later (transport, communication, IT), we are still here, with everyone working — differently than previous generations, but still working. We improved a bit by reducing child labor, but doubled the workforce by including women, and we always need more people at work for longer.
The only constant thing is we keep people at work until we figure out how to burn the last remaining atom. People are far too optimistic believing a new tech will move people away from work.
In every case I showed it to people, they were amazed and delighted because it absolutely did matter to them.
This may be a biased sample because of how many of my colleagues (like myself) didn't speak German very well despite being in Berlin, but it was definitely the case that they benefited from me showing them a thing invented a decade earlier.
> Humanity has been through multiple "event horizons", or whatever you want to call them.
No: an event horizon is always ahead of you. That's why I'm choosing that name and rejecting the other.
You can wake up to the news the world has changed unpredictably, or the stuff you thought was scifi turned out to be possible, but it doesn't happen every day, and even less so with your* job in particular.
* generic "you"
More common is e.g.: radio was invented, and almost immediately people started predicting that by 2000 there would be video conference calling — fiction writers can see possible futures coming well before it's possible. Plus some futures that aren't possible, of course.
Radioactivity gets discovered, Victorian fiction talks about it being weaponised. Incorrectly as it happens, but it was foreseeable.
When I was a kid, phone numbers were 6 digits, and "long distance" meant the next town over. It getting better and cheaper was predictable and predicted, as were personal videophones. Even the original iPhone wasn't much of a step change from everything around it, and no single update to the iPhone was either.
Sputnik? Led to Star Trek, which even in TOS had characters using tablet computers that wouldn't be realised for 40-50 years.
AI? The fiction is everything and anything from I Have No Mouth, and I Must Scream to The Culture, via Her, The Terminator, Colossus: The Forbin Project, Short Circuit, and Isaac Asimov, and the attempts to do "real" predictions are even more diverse and include precursors for each scenario and what to expect on the way.
Nobody knows how close any of the speculation is, or how soon, assuming it's right, or even when specific things such as full self-driving are coming (despite being in 80s TV).
We don't know how long it will take between FSD and humanoid robots passing same legal/safety standard (despite everyone already being familiar with the ideas since The Jetsons) or what having them would mean in practice.
We don't even know what we're getting from the AI labs as soon as the US election is over. From what I've heard, neither do they.
I do software, have done it for about 20 years. My job has been perpetually "on the verge of becoming obsolete in the next few years", lol. Also, "yeah but as you get older people will not want to hire you" and all that.
Cool stories, *yawn*.
Meanwhile, in reality, my bank account disagrees, and more and more work has been increasingly available for me.
This wouldn't have put radiologists out of work, but it would have made the diagnostic process much quicker, and maybe more affordable.