Posted by sungam 6 days ago

Show HN: I'm a dermatologist and I vibe coded a skin cancer learning app (molecheck.info)
Coded using Gemini Pro 2.5 (free version) in about 2-3 hours.

Single file including all html/js/css, Vanilla JS, no backend, scores persisted with localStorage.

Deployed using ubuntu/apache2/python/flask on a £5 Digital Ocean server (but could have been hosted on a static hosting provider as it's just a single page with no backend).

Images / metadata stored in an AWS S3 bucket.
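
For anyone curious, the localStorage persistence is roughly this simple. A minimal sketch, not the app's actual code (the key and field names are illustrative):

    // Sketch of score persistence in vanilla JS. Scores survive page
    // reloads with no backend; key and field names are illustrative.
    const KEY = 'molecheck-scores';

    function loadScores() {
      try {
        return JSON.parse(localStorage.getItem(KEY)) || { correct: 0, total: 0 };
      } catch (e) {
        return { correct: 0, total: 0 }; // corrupt or missing entry: start fresh
      }
    }

    function recordAnswer(wasCorrect) {
      const s = loadScores();
      s.total += 1;
      if (wasCorrect) s.correct += 1;
      localStorage.setItem(KEY, JSON.stringify(s));
      return s;
    }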

426 points | 258 comments
jmull 6 days ago|
I kind of love the diy aspect of ai coding.

A short while ago, a dermatologist with this idea would have had to find a willing and able partner to do a bunch of work -- meaning that most likely it would just remain an idea.

This isn't just for non-tech people either -- I have a decades-long list of ideas I'd like to work on but simply do not have time for. So now I'm cranking up the ol' AI agents and seeing what I can do about it.

Waterluvian 5 days ago||
I feel like the name “vibe code” is really the only issue I have. Enabling everyone to program computers to do useful things is very very good.
sollewitt 5 days ago|||
It captures not understanding what you're doing, crossed with limited AI understanding, which means the whole thing is running on vibes.
jader201 5 days ago||
I still wish a better name had been coined/had stuck.

It’s hard to take the name “vibe coding” seriously, and maybe that was the whole point, but I feel like AI coding is a bit more serious than the name “vibe coding” implies.

Anyone who disagrees that it should be taken more seriously can surely at least agree that it's likely to cross that threshold in the not-too-distant future, yet we're still going to be stuck with the silly name.

krapp 5 days ago||
It is the perfect name for an industry that considers "enshittification" a serious term of art.

And I say that knowing it will absolutely rule everything in the future - I'd bet at least half of all Show HNs are vibe coded apps now. Not long ago tech was seriously talking about monkey JPEGs being the future of global commerce and finance. We've been living in unserious times for a while.

I'd feel better about vibe coding and AI in general if I thought it would lead to more people learning how to do what it enables for themselves, and actually exercise control over their devices and creativity. But as useful as it can be - and I have to concede that much at this point - it requires depending on centralized AI services and isn't much better than proprietary code in terms of defending end user rights. I fear AI driven everything will lead to more closed systems and more corporate commoditization of our data and our lives. Unfortunately from what I've seen not only do many vibe coders not care, they don't want to care and they think anyone who does care is a slope-headed neanderthal.

So yeah, call it what it is. OP's app would have just been a simple web app ten years ago, it's just a quiz, doesn't require any deep coding magic. But no one cares about anything but the vibe anymore.

AuthAuth 5 days ago||||
I wish that computers were designed in a way that pushed users to script more. It's such a powerful ability that would benefit almost every worker.
somenameforme 5 days ago|||
This has often been tried. SQL, for instance, was specifically designed to feel like natural language and be usable by people with minimal technical background. But it always runs into the same problem: as you expand the capabilities of these scripting languages and get into the nitty-gritty reality of what programming genuinely involves, they always end up as really verbose, awkward-to-use languages that are otherwise like any other programming language.

Even worse is the tendency of scripting languages to try to be robust against errors, so you end up with languages that are filled with extremely subtle nuance in things like their syntax parsing, which, in many ways, makes them substantially more complex than languages with extremely strict syntactic enforcement.

AuthAuth 4 days ago|||
Ah so you're saying we should remove error handling and let the users feel the consequences of their actions.
somenameforme 4 days ago||
The users are already feeling it, but may have trouble understanding why! The reason strongly typed languages with rigid syntax are easier is because it's much more difficult to accidentally do things like check if 3 is greater than true.
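
For instance, JavaScript will happily evaluate exactly that comparison (a quick illustration, not from any particular codebase):

    // JavaScript coerces the boolean before comparing: true becomes 1,
    // so this is really asking whether 3 > 1. No error, no warning.
    console.log(3 > true);  // true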
Waterluvian 5 days ago||||
Apple has always been pretty good at this. AppleScript, Automator, Shortcuts. I did all kinds of cool stuff in OSX 10.4 back before I wrote any traditional code.
mbreese 5 days ago|||
Before that was HyperCard. It was always amazing to me the types of applications that could be written with HyperCard.

In a similar way, VBA was amazing in MS Office back in the day. If you ever saw someone who was good at Visual Basic in Excel, it’s impressive the amount of work that could get done in Excel by a motivated user who would have been hesitant to call themselves a programmer.

cik 5 days ago||
I wrote and sold my first piece of software in HyperCard. It was a pretty lame Choose Your Own Adventure style game where you clicked on buttons after reading the text. Seven-year-old me was pretty chuffed to buy some baseball cards with the proceeds of his hobby. I really, really miss that world.
sleepybrett 5 days ago|||
Applesoft Basic
worldsayshi 5 days ago|||
Workers are over-specialized, and our business domain models are rigid. We want to streamline and standardize, which often means that code is written in only a few places.

It would be nice if we could have our cake and eat it too here. With LLMs there are certainly opportunities, if we can find ways to allow both custom scripting and large-scale, domain-constrained logic.

notTooFarGone 5 days ago||||
The only issue is security. The number of open endpoints, default logins and the like will get out of control.
vrighter 5 days ago|||
but they're not programming computers. They're commissioning footgun-riddled software from a junior intern
jstummbillig 5 days ago|||
People have the grandest ideas about the quality of the average piece of software existing in the real world.
vrighter 5 days ago||
For your own use, you can use whatever crap you have a machine come up with for you.

For use on others, no. It's not just about the quality; it's about not even knowing what you're selling.

PUSH_AX 5 days ago|||
What is the end goal of software? The vast majority of engineers seem to believe the goal is for the software to be perfect, when actually it's to do things like catch cancer early or make money. Do you think a person whose life was saved by software with footguns cares?

Lose the tunnel vision.

vrighter 5 days ago||
They are free to use them for themselves. But using these apps on others can be life-threatening in some cases. And even when it isn't, it's still unethical to sell such software when they are literally unable to describe what it can and cannot do.
farai89 5 days ago|||
I believe this captures it well. There are many people who would previously have needed to hire dev shops to get their ideas out, and now they can just get them done faster. I believe the impact will be larger in non-tech sectors.
NitpickLawyer 5 days ago|||
Right. And what a lot of folks here miss is that the prototype was always bad. This process only speeds up the MVP, and gives the idea person a faster way to validate an idea.

Focusing on "but security lol" is a bad take, IMO. Every early attempt is bad at something. Be it security, or scale, or any number of problems. Validating early is good. Giving non-tech people a chance is good. If an idea is worth pursuing, you can always redo it with "experts". But you can't afford experts (hell, you can't even afford amateurs) for every idea you want put into an MVP.

gentooflux 5 days ago||
There's a big difference between a "prototype" (or a POC, or a spike, or whatever your company calls it) and an "MVP" (minimum viable product). An insecure product is not viable. A product which cannot be extended or maintained without being almost completely rewritten is not viable.

MVP means just enough engineered code to solve a problem, rough around the edges and lacking features sure, but not built by someone who has literally no idea what they were doing.

Prototypes of physical products are never put into production and sold to consumers. Unfortunately software prototypes "run", and are sold at that point. Then they begin to scale, and the inherent flaws in their design are amplified. The same thing used to happen with MS Access apps; the same thing still happens with "low code" solutions.

The engineers cost just as much after the prototype phase, but if you don't hire them to build your MVP then you never have one.

NitpickLawyer 5 days ago||
Yeah, no. Every MVP I've ever seen has been riddled with problems. Hell, even publicly launched projects are a mess most of the time. How many social networks have we had in the past 5 years that were pwned right after launch? I remember at least 4 or 5 very public failures (Firebase tokens, client-side APIs and so on). Those are just the most public ones.

Everyone wants to pretend that the software used to be better, but the reality is that MVPs and sometimes even public launches were always a house of cards.

gentooflux 5 days ago||
You are pointing to the same low code/no code prototypes that I am, but you keep calling them MVPs for some reason. There's no "used to be better" here, there is good and bad software full stop.
utyop22 5 days ago|||
Most ideas suck and never deserve to see the light of day.

True productivity is when what is produced is of benefit.

justin 5 days ago|||
Why don’t they deserve to see the light of day? Maybe the market gets to decide what “sucks” or doesn’t. More ideas in the marketplace gives users more choice.
__MatrixMan__ 5 days ago|||
Different people have different ideas about what counts as benefit.

The only kind of productivity is progress toward somebody's arbitrary goals. There's nothing "true" about it.

jmkni 6 days ago|||
Same, I've had ideas rattling around in my brain for years which I've just never executed on, because I'm 'pretty sure' they won't work and it's not been worth the effort

I've been coding professionally for ~20 years now, so it's not that I don't know what to do, it's just a time sink

Now I'm blasting through them with AI and getting them out there just in case

They're a bit crap, but better than not existing at all, you never know

ecocentrik 5 days ago|||
I'm a big fan of barriers to entry and using effort as a filter for good work. This derma app could be so much better if it actually taught laypeople to identify the difference between carcinomas, melanomas and non-cancerous moles instead of just being a fixed loop quiz.
ptero 5 days ago|||
IMO it is better to keep the barriers to entry as low as possible for prototyping. Letting domain experts build what they have in mind themselves, on a shoestring, is a powerful ability.

Most such prototypes get tossed because of a flaw in the idea, not because they lacked professional software help. If something clicks the prototype can get rebuilt properly. Raising the barriers to entry means significantly fewer things get tried. My 2c.

bluefirebrand 5 days ago||
> IMO it is better to keep the barriers to entry as low as possible for prototyping

Not in an industry where prototypes very often get thrown into production because decision makers don't know anything about the value of good tech, security, etc

goosejuice 5 days ago||
That's completely fine for most software.
vrighter 5 days ago||
it most definitely is not.
goosejuice 4 days ago||
It's perfectly fine for most MVPs to go into production. Most SaaS software is solved. Prototypes are outsourcing the hard parts around security. The hard part is making a sale and finding the right fit. Spending 4x the cost on a product that never makes a sale is bad economics. This app isn't remotely harmful, so do you care to make an argument for why it shouldn't exist?

Should decision makers be more informed? Yes, of course, but that's not an argument for gatekeeping. We shouldn't be gatekeeping software or the web. Not through licensure or some arbitrary meaning of "effort". That will do nothing but stifle job growth and I'd very much like to keep developers employed.

AlecSchueler 5 days ago||||
Same here, that's why I only ever code in assembly and recommend everyone else to do the same.
jmkni 5 days ago|||
Well I mean more low-brow stuff like "Pint?", a social media app to find other people to go for a pint with :)
citizenpaul 5 days ago||||
>They're a bit crap, but better than not existing at all, you never know

I don't agree. I think that because of LLM/vibe coding my random ideas, I've actually wasted more time than if I had done them manually. The vibe code, as you said, is often crap, and often, after I've spent a lot of time on it, I realize that there are countless subtle errors that mean it's not actually doing what I intended at all. I've learned nothing and made a pointless app that does not even do anything, but looks like it does.

That's the big allure that has been keeping the "AI" hype afloat. It always seems so dang close to being a magic wand. Then, after time spent reviewing with a critical eye, you realize it has been tricking you, like a janitor who just sweeps the dirt under the rug.

At this point I've relegated LLMs to advanced find-and-replace and formatted data structuring (take this list and make it into JSON), and that's about it. For basically everything else LLMs do, tools already exist that do it better.

I can't count how many times "AI" has taken some logic I want, produced a bunch of complex-looking stuff that takes forever to review, and then I find out it fudged the logic to simply always be true/false when it's not even a boolean problem.

anthonypasq96 5 days ago||
Brother, no one cares. If LLMs made something exist that did not exist previously, they worked. It doesn't matter if you could have done it faster by hand, if doing so would have resulted in the program not existing.
citizenpaul 5 days ago||
To anyone wondering if there are paid LLM shills on HN, here is proof: a less-than-30-day-old account whose only comment is nonsense praise of LLMs against legitimate criticism.

user: anthonypasq96 | created: 22 days ago | karma: 2

anthonypasq96 4 days ago||
why are people online obsessed with the idea that anyone who disagrees with them is a paid actor
vrighter 5 days ago|||
Well yeah, better not existing at all, actually, if they're crap and you're OK with that. Those just serve to pad out your resume for nontechnical people. It's not like you're actually learning much if you couldn't be bothered to even remove the crap parts.
jmkni 5 days ago||
My resume has plenty of padding already, and it's not about learning; it's about "maybe this random idea might actually work" and proving out that concept.
sungam 6 days ago|||
Yes I agree - I could probably have worked out how to do it myself but it would have taken weeks and realistically I would never have had the time to finish it.
amelius 6 days ago|||
Well, image classification tasks don't require coding at all.

You just need one program that can read the training data, train a model, and then do the classification based on input images from the user.

This works for basically any kind of image, whether it's dogs/cats or skin cancer.

chaps 6 days ago||
...none of this requires coding?
amelius 6 days ago||
No additional coding.

You can take the code from a dog/cat classifier and use it for anything.

You only need to change the training data.
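
To illustrate the claim (a hedged sketch in TensorFlow.js, keeping with the thread's browser-app theme; the toy architecture and tensor names are made up, and whether this is good enough for medical use is exactly what's disputed below):

    // The same generic classifier code trains on either task;
    // only the labeled tensors (xs, ys) change between runs.
    import * as tf from '@tensorflow/tfjs';

    function buildClassifier(numClasses) {
      const model = tf.sequential();
      model.add(tf.layers.conv2d({
        inputShape: [128, 128, 3], filters: 16, kernelSize: 3, activation: 'relu'
      }));
      model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
      model.add(tf.layers.flatten());
      model.add(tf.layers.dense({ units: numClasses, activation: 'softmax' }));
      model.compile({
        optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']
      });
      return model;
    }

    // Identical code, different data:
    // await buildClassifier(2).fit(dogCatXs, dogCatYs, { epochs: 10 });
    // await buildClassifier(2).fit(lesionXs, lesionYs, { epochs: 10 });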

chaps 6 days ago|||
I've done enough image classification stuff that, nah. If all you care about is high level confirmation with high error rates, sure. But more complex tasks like, "Are these two documents the same?" are much, much harder and the failure modes are subtle.
amelius 6 days ago||
I think most experts wouldn't approach this problem as an image classification problem ...

And, more importantly, I don't think you'll see good results either from a vibe-coded solution.

So I don't think your comment makes sense here.

jacquesm 5 days ago|||
> I think most experts wouldn't approach this problem as an image classification problem ...

Indeed. It is first and foremost a statistics and net patient outcomes problem.

The image classification bit - to the best of the current algorithms' abilities - is essentially a solved problem (even if it isn't quite that simple), and when better models become available you plug those in instead. There is no innovation there.

The hard part is the rest of it. And without a good grounding in medical ethics and statistics that's going to be very difficult to get right.

chaps 5 days ago|||
It's a problem that has many image classification components to it.

"Vibe coding" does a surprisingly good job at this problem.

Yes it does. :)

amelius 5 days ago||
Maybe, but you have broadened the scope from a simple image classification problem to a pipeline of multiple image classification steps.
chaps 5 days ago||
Friend, we're talking about classifying skin cancer. The topic is already quite broad.
amelius 5 days ago||
I think it is a pointless discussion because at some level we are both right.

I'm not going to argue with the idea that a pre-made classifier can be improved upon by experts.

But pre-made classifiers exist and are useful for a very large variety of tasks. This was the original point.

runako 6 days ago|||
> No additional coding.

> You can take the code from

https://xkcd.com/2501/

More seriously, for most non-programmers, even typing into a console is "coding."

growingkittens 6 days ago||
I am a "noncoder" because of a number of reasons. My best friend is a "coder" and still starts instructions with "It's easy! Just open the terminal...".

Unfortunately, I do advanced knowledge work, and the tools I need technically often exist...if you're a coder.

Coding is not that accessible. The intermediary mental models and path to experience required to understand a coding task are not available to the average person.

asadotzler 5 days ago|||
[flagged]
bitmasher9 5 days ago|||
This is not a healthcare app, it’s a health education app. This app will never have PII, or be used for treatment/diagnosis. If it goes down tomorrow it will have zero impact on anyone’s healthcare.
tptacek 5 days ago|||
It's not OK to talk this way about people's Show HN projects; this is in the guidelines.
yread 6 days ago||
Why? I know tons of coding MDs. A pathologist hacking the original Prince and adding mods, in assembly no less. A molecular pathologist organizing their own pipelines and ETLs.

Lots of people like computers but earn a living doing something else

jonahx 6 days ago||
He wasn't saying no coding MDs existed. Just that, generally speaking, most MDs would have had to partner with a technical person, which is true. And is now less true than it was before.
jjallen 6 days ago||
Very cool. I learned a lot as a non-dermatologist, albeit someone with a sister who has had melanoma at a very young age.

I went from 50% to 85% very quickly. And that’s because most of them are skin cancer and that was easy to learn.

So my only advice would be to make closer to 50% actually skin cancer.

Although maybe you want to focus on the bad ones and get people to learn those more.

This was way harder than I thought detection would be. Makes me want to go to a dermatologist.

sungam 6 days ago||
Thanks, this is a good point - I think a 50:50 balance of cancer versus harmless lesions would be better and will change this in a future version.

Of course, in reality the vast majority of skin lesions and moles are harmless; the challenge is identifying those that are not, and I think that even a short period of focused training like this can help the average person identify a concerning lesion.

wizzwizz4 5 days ago||
https://xkcd.com/2501/
alanfranz 5 days ago|||
> So my only advice would be to make closer to 50% actually skin cancer.

If I were to code this for "real training" of a dermatologist, I'd make this closer to the "real world" rate. As a dermatologist, I'd imagine that probably just 1 out of 100 (or something like that) skin lesions that people think could be cancerous actually are.

With the current dataset, there are just too many cancerous images. This makes it kind of easy to just flag something as "cancerous" and still retain a good "score". But the point is moot: if, as a dermatologist, you send _too many_ people without cancer for further exams, then you're negating the usefulness of what you're doing.

mewpmewp2 5 days ago||
It needs a specific scoring system where each false positive causes only a small score drop, but a false negative causes a huge one. At the same time, as you said, positives would be much rarer. Should be easy to ask an LLM to vibe code that so it simulates the real world and its consequences.
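
Something like this sketch (the penalty weights are made up):

    // Asymmetric scoring: missing a cancer (false negative) costs far more
    // than a false alarm (false positive). Weights here are illustrative.
    function scoreAnswer(isCancer, flaggedAsCancer) {
      if (isCancer && flaggedAsCancer) return 10;   // true positive
      if (!isCancer && !flaggedAsCancer) return 1;  // true negative
      if (!isCancer && flaggedAsCancer) return -2;  // false positive: mild penalty
      return -50;                                   // false negative: severe penalty
    }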
jjallen 5 days ago|||
Thought about this some more. I think you want to start at 100%, or close to it, so people actually learn what needs to be learned: what malignant skin conditions actually look like.

Then, once they have learned that, it gets progressively harder: the closer the mix is to 50%, the harder it is to score better than chance.
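
Concretely, something like this sketch (the curve is arbitrary):

    // Start the quiz with mostly cancerous examples, then drift toward a
    // 50/50 mix as the learner's running accuracy improves.
    function cancerFraction(accuracy) {
      // accuracy in [0, 1]: 0.9 for a beginner, approaching 0.5 for an expert
      return 0.5 + 0.4 * (1 - accuracy);
    }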

loeg 5 days ago|||
I found the first dozen to be mostly cancer and then the next dozen were mostly non-cancer. (Not sure if it's randomized.) (Also, I'm really bad at identifying cancerous vs non-cancerous skin lesions.)
sungam 5 days ago||
It is randomized so probably just bad luck! FWIW I get a high score and another skin cancer doctor who commented also gets a high score so it is possible to make the diagnosis in most cases on the basis of these images.
bigbacaloa 5 days ago||
[dead]
vindex10 6 days ago||
Hi! That's a really useful tool!

I wish it also explained the decision-making process: how to tell from the picture what the right answer is.

I'm really getting lost between melanoma and seborrheic keratosis / nevus.

I went through ~120 pictures, but couldn't learn to distinguish those.

Also, the guide in the burger menu leads to a page that doesn't exist: https://molecheck.info/how-to-recognise-skin-cancer

sungam 6 days ago||
This is very helpful feedback. I will add some more information to help with the diagnosis and add an article in the burger menu with detailed explanation.

Being honest, I didn't expect anyone apart from a few of my patients to use the app, and certainly did not expect the front page of HN!

jgilias 5 days ago||
Hey!

Thanks for making this! A bit more polish and this is something I’d make sure everyone in my family has played with.

Imagine a world where every third person is able to recognise worrying skin lesions early on.

addandsubtract 5 days ago|||
I'm not a doctor, but there's an ABCDE[0] rule of thumb to spot signs of skin cancer:

Asymmetry: One half of the spot is unlike the other half.

Border: The spot has an irregular, scalloped, or poorly defined border.

Color: The spot has varying colors from one area to the next.

Diameter: Melanomas are usually greater than 6 millimeters, or about the size of a pencil eraser.

Evolving: The spot is changing in size, shape, or color, or developing new symptoms (itching, bleeding).

[0] https://www.aad.org/public/diseases/skin-cancer/find/at-risk...

jgilias 5 days ago||
Also came to the same conclusion. I want a mode where 50% of the set are melanomas, and the other 50% are “brown benign things”.
sungam 5 days ago||
Will add this in next version!
lukko 6 days ago||
I'm a doctor too and would love to hear more about the rationale and process for creating this.

It's quite interesting to have a binary distinction, 'concerned vs not concerned', which I guess would be more relevant for referring clinicians than getting an actual diagnosis. Whereas a multiple-choice 'BCC vs melanoma' format would be more of a learning tool, useful for medical students.

Echoing the other comments, but it would be interesting to match the cards to the actual incidence in the population or in primary care - although it may be a lot more boring with the amount of harmless naevi!

sungam 6 days ago|
Thanks for your comment. The main motivation for me in developing the app was that lots of my patients wanted me to guide them to a resource that could help them improve their ability to recognise skin cancer, and, in my view, a good way to learn is to be forced to make a decision and then receive feedback on that decision.

For the patient I think the decision actually is binary: either (i) I contact a doctor about this skin lesion now, or (ii) I wait for a bit to see what happens, or do nothing. In reality most skin cancers are very obvious even to a non-expert, and the reason they are missed is that patients are not checking their skin or have no idea what to look for.

I think you are right about the incidence - it would be better to have a more balanced distribution of benign versus malignant, but I don't think it would be good to just show 99% harmless moles and 1% cancers (which is probably the accurate representation of skin lesions in primary care), since it would take too long for patients to learn the appearance of skin cancer.

jazoom 5 days ago||
> most skin cancers are very obvious even to a non-expert and the reason they are missed are that patients are not checking their skin or have no idea what to look for

I am a skin cancer doctor in Queensland and all I do is find and remove skin cancers (I find between 10 and 30 every day). In my experience the vast majority of cancers I find are not obvious to other doctors (not even seen by them), let alone obvious to the patient. Most of what I find are BCCs, which are usually very subtle when they are small. Even when I point them out to the patient they still can't see them.

Also, almost all melanomas I find were not noticed by the patient and they're usually a little surprised about the one I point to.

In my experience the only skin cancers routinely noticed by patients are SCCs and Merkel cell carcinomas.

With respect, if "most skin cancers are very obvious even to a non-expert" I suggest the experts are missing them and letting them get larger than necessary.

I realise things will be different in other parts of the world and my location allows a lot more practice than most doctors would get.

Update: I like the quiz. Nice work! In case anyone is wondering, I only got 27/30. Distinguishing between naevus and melanoma without a dermatoscope on it is sometimes impossible. Get your skin checked.

sungam 5 days ago||
Thanks for your kind words about the app, and well done for getting such a high score! I agree that BCC is often subtle. My practice is also largely focused on skin cancer. I would say that the majority of melanomas (and SCCs) that I diagnose would be obvious to a patient who had undergone a short period of focused training and checked their skin regularly. A possible explanation for the difference in our experience is that the incidence of skin cancer (and also of atypical but benign moles) is a lot higher in Australia than in the UK.
jazoom 5 days ago||
There would be quite the difference in our patient demographics.

I have quite a few patients from the UK who have had several skin cancers. Invariably they went on holidays to Italy or Spain as a child and soaked up the sun.

Keep up the great work.

jacquesm 5 days ago||
Nice job. Now you really need to study up on the statistics behind this, and you'll quickly come to the conclusion that this was the easy part. What to do with the output is the hard part. I've seen a start-up that made their bread and butter on such classifications; they did an absolutely great job of it, but found the problem of deciding what to do with such an application without ending up with net negative patient outcomes to be far, far harder than the classification problem itself. The error rates, no matter how low, are going to be your main challenge; both false positives and false negatives can be extremely expensive, both in terms of finance and in terms of emotion.
sungam 5 days ago|
Thanks for your comment - the purpose of this app is patient education rather than diagnosis but I will definitely have a look at the relevant stats in more detail!
jacquesm 5 days ago|||
The risk, I think, is that people will not understand that that is your goal; instead they will use it to help them diagnose something they might think is suspicious.

They will go through your images until they get a good score, believe themselves an expert, and proceed to diagnose themselves (and their friends).

By the time you have an image set that is representative and that will actually educate people to the point where they know what to do and what not to do, you've created a whole raft of amateur dermatologists. And the result of that will be that a lot of people are going to knock on the doors of real dermatologists, who might tell them not to worry about something when they are now primed to argue with them.

I've seen this pattern before with self diagnosis.

nextaccountic 5 days ago||
So what? Are you arguing that ensuring patients have less information available about diseases leads to better outcomes? What's your take on public campaigns promoting breast cancer self-examination by touch (very common where I live)?

As a patient I'd rather have more information available to me, even if I ultimately defer to specialists

Also it's common for medical professionals to ignore symptoms of certain demographics - in those cases, enabling patients to advocate for themselves is essential https://www.nytimes.com/2022/07/29/well/mind/medical-gasligh...

A personal anecdote of mine was a friend who had abdominal pain for months. She had some comorbidities that made it easier for doctors to dismiss her symptoms. After visits to various doctors she only got adequate treatment because I went with her and advocated for her. After multiple options were discarded, a renal infection was eventually diagnosed and treated. If she had gone with the opinion of the first doctor, she would still have the underlying condition untreated.

thebeardisred 5 days ago|||
To that end I quickly learned something that AI models would as well (which isn't your intention):

Pictures with purple circles (e.g. faded pen ink on light skin outlining the area of concern) are a strong indicator of cancer. :wink:

DrewADesign 6 days ago||
This is awesome. Great use of AI to realize an idea. Subject matter experts making educational tools is one of the most hopeful things to come out of AI.

It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades, and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that.

jacquesm 5 days ago||
On the contrary. There is a whole raft of start-ups around this idea and other related ones. And almost all of them have found the technical challenges manageable, and the medical and ethical challenges formidable.
DrewADesign 5 days ago||
I’m not exactly sure what in my comment you’re responding to, here: My appreciation that a subject matter expert is now capable of creating a tool to share their knowledge, that tech investors are using AI to siphon money from people that actually make things, or that good projects like this are used to justify that siphoning?
jacquesm 5 days ago||
You wrote:

"This is awesome. Great use of AI to realize an idea. Subject matter experts making educational tools is one of the most hopeful things to come out of AI.

It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades, and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that."

Let's take that bit by bit then if you find it hard to correlate.

> This is awesome.

Agreed, it is a very neat demonstration of what you can do with domain knowledge married to powerful technology.

> Great use of AI to realize an idea.

This idea, while a good one, is not at all novel and does not require vibe coding or LLMs in any way, but it does rely on a lot of progress in image classification in the last decade or so if you want to take it to the next level. Just training people on a limited set of images is not going to do much of anything other than to inject noise into the system.

> Subject matter experts making educational tools is one of the most hopeful things to come out of AI.

Well.. yes and no. It is a hopeful thing but it doesn't really help when releasing it bypasses the whole review system that we have in place for classifying medical devices. And make no mistake: this is a medical diagnostic device and it will be used by people as such even if it wasn't intended as such. There is a fair chance that the program - vibe coded, remember? - has not been reviewed and tested to the degree that a medical device normally would be and that there has been no extensive testing in the field to determine what the effect on patient outcomes of such an education program is. This is a difficult and tricky topic which ultimately boils down to a long - and possibly expensive - path on the road to being able to release such a thing responsibly.

> It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades

As I wrote, I'm familiar with quite a few startups in this domain. Education and image classification + medical domain knowledge is - and was - investable and has been for a long time. But it is not a simple proposition.

> and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that.

Hardly anybody that I'm aware of - besides the Trump administration - currently opposes fighting cancer, there are veritable armies of scientists in academia and outside of it doing just that. This particular kind of cancer is low hanging fruit because (1) it is externally visible and (2) there is a fair amount of training data available already. But even with those advantages the hard problems, statistics, and ultimately the net balance in patient outcomes if you start using the tool at scale are where the harsh reality sets in: solving this problem for the 80% of easy to classify cases is easy by definition. The remaining 20% are hard, even for experts, more so for a piece of software or a person trained by a piece of software. Even a percentage point or two shift in the confusion matrix can turn a potentially useful tool into a useless one or vice versa.

That's the problem that people are trying to solve, not the image classification basics and/or patient education, no matter how useful these are when used in conjunction with proper medical processes.

But props to the author for building it and releasing it, I'm pretty curious about what the long term effect of this is, I will definitely be following the effort.

Better like that?

pojzon 5 days ago|||
I hope at some point AI will replace most diagnostics and doctors that are not up to date.

I also hope it will completely kill US pharmacy conglomerate.

AI was trained on public domain knowledge. All things we get from it should be free and available everywhere.

I can only hope.

jacquesm 5 days ago||
> I hope at some point AI will replace most diagnostics and doctors that are not up to date.

That's a valid hope, but not a very realistic one just yet. The error rates are just too high. Medicine is messy and complex. Yes, doctors get it wrong every now and then. But AI gets it wrong far more frequently, still. It can be used as a tool in the arsenal of the medical professional, but we are very far away from self-service diagnosis for complex stuff.

> I also hope it will completely kill US pharmacy conglomerate.

That is mostly based on molecules and patents, not so much on diagnostics, that's a different group of companies.

> AI was trained on public domain knowledge. All things we get from it should be free and available everywhere.

Not necessarily, but for the cases where it is I agree that the models should be free and open.

> I can only hope.

Yes. I've seen some very noble efforts run aground for lack of capital, and every time that happens I realize that not everything is as simple as I would like it to be. I've just financed a - small - factory for something that I consider both useful and urgent, but my means are limited and it was clear that I had no profit motive (which actually means my money went a lot further than if I had had a profit motive).

Once you get into medical education or diagnostics, the amounts usually run into the millions if you want to really move the needle. No single individual is going to put that out there on their own dime unless they were very wealthy to begin with. I've invested in a couple of companies like that. They all failed, predictably, because raising follow-on investments for such stuff is very hard, even if you can get it to work in principle.

The best example of stuff like that that did work is how the artificial pancreas movement is pushed forward hard by people hacking sensors and insulin pumps. They have forced industry to wake up and smell the coffee: if they weren't going to be the ones to offer it then someone else inevitably would. Even so it is a hard problem to solve properly. But it is getting there:

https://rorycellanjones.substack.com/p/wearenotwaiting-the-p...

DrewADesign 5 days ago|||
> 684 words

I believe this is a simple educational quiz using a pre-selected set of images from cited medical publications to help people distinguish between certainly benign and potentially cancerous skin anomalies… Is that incorrect?

jacquesm 5 days ago||
Yes, that's correct.

But that won't stop people from believing they are now able to self diagnose.

DrewADesign 5 days ago||
Is that also a problem with pamphlets that juxtapose these same exact sort of images?
jacquesm 5 days ago||
> Is that also a problem with pamphlets that juxtapose these same exact sort of images?

Such pamphlets typically contain a lot more guidance on what the context is within which they are provided. They don't come across as a 'quiz' even if they use the same images and they do not try to give the impression of expertise gathered. They tend to be created by communications experts who realize full well what the result of getting it wrong can be. Compared to 'research on the internet' there is a lot of guidance in place to ensure that the results will be a net positive.

https://www.kanker.nl/sites/default/files/library_files/563/...

Is a nice example of such a pamphlet. You were complaining about the number of words I used. Check the number of words there compared to the number of words in the linked website.

There is no score, there is no 'swiping' and there is tons of context and raising of awareness, none of which is done by this app. I'm not saying such an app isn't useful, but I am saying that such an app without a lot of context is potentially not useful and may even be a negative.

DrewADesign 5 days ago||
Alrighty. I think you’re reading far far far too much into the implications of a slightly interactive version of a poster that was in my high school nurse’s office. I’m all set here. Have a good one.
jacquesm 5 days ago||
That 'slightly interactive' bit and the fact that it is now in the home rather than in your high school nurse's office is what makes all the difference here.
sungam 6 days ago||
Thanks for your comment - I'm pleased that people have found it useful and definitely only possible because of AI coding. I agree that this is likely to be applicable to non-experts in many different areas.
DrewADesign 6 days ago||
Absolutely. I hope you’ll encourage your colleagues to follow suit!
meindnoch 6 days ago||
Is this really "invasive melanoma"? https://drmagnuslynch.s3.eu-west-2.amazonaws.com/isic-images...
sungam 6 days ago||
According to the metadata supplied with the dataset, yes.

It could definitely be a misclassification; however, a small proportion of moles that look entirely harmless to the naked eye and under the dermatoscope (skin microscope) can be cancerous.

For example, have a look at these images of naevoid melanoma: https://www.google.com/search?tbm=isch&q=naevoid+melanoma

This is why dermatology can be challenging and why AI-based image classification is difficult from a liability/risk perspective

I was previously clinical lead for a melanoma multidisciplinary meeting, and 1-2 times per year I would see a patient with a melanoma that presented like this; looking back at previous photos, there were no features that would have worried me.

The key thing that I emphasise to patients is that even if a mole looks harmless it is important to monitor for any signs of change since a skin cancer will almost always change in appearance over a period of several months

jonahx 5 days ago|||
> however a small proportion of moles that look entirely harmless to the naked eye and under the dermatoscope (skin microscope) can be cancerous.

That is very scary.

So the only way to be sure is to have everything sent to the lab. But I'm guessing the cost/benefit of that from a risk perspective makes it prohibitive? So if you're an unlucky person with a completely benign-presenting melanoma, you're just shit out of luck? Or will the appearance change before it spreads internally?

sungam 5 days ago||
This is why dermatology involves risk management not just image interpretation. Yes the lesion will likely change with time. Realistically yes, if you have a melanoma that looks like a harmless mole then the diagnosis is likely to be delayed. But remember that these are a tiny proportion of all skin cancers and you are much more likely to get some other form of cancer - most of which occur internally and cannot be seen at all.
kmoser 5 days ago||
This is a good example of what I find frustrating as a patient. Sure, cancers like that may be a tiny proportion of all skin cancers, but if I have it then it's 100% of my skin cancers. And given how serious skin cancer can be, I'd at least want my doctor to let me know how I could get this lesion tested, even if it's out of my own pocket.
daedrdev 5 days ago|||
The risk of misdiagnosis, and thus unnecessary treatment, can mean that such testing actually increases your chance of dying or decreases your life expectancy. It depends on the case, but it's why we don't screen everyone for cancers unless someone is high risk (such as being old).
sungam 5 days ago|||
I agree with you - if a patient is concerned by a specific skin lesion and requests removal then I will support this even if it appears harmless, particularly if it is new or changing.
48terry 5 days ago|||
> According to the metadata supplied with the dataset yes

"idk but that's what it says" somehow this does not inspire confidence in the skin cancer learning app.

cindyllm 5 days ago||
[dead]
jonahx 6 days ago|||
Yeah that seems likely to be a misclassification...
globalise83 5 days ago||
As someone with literally every single possible variation of skin blemish, mole and God knows what else, this scares the living hell out of me.
abootstrapper 5 days ago||
Get a yearly full body skin check from a dermatologist. It’s a common thing. I’ve been doing it for years because of my skin type. They caught early Basal cell carcinoma the last time I went.
mewpmewp2 5 days ago||
Yeah, I have just one concerning spot, but it still made me spend 20 minutes googling the difference between dermatofibroma and basal cell cancer. I think it is dermatofibroma, but I guess it's a good point anyway to get it checked out.
rfrey 6 days ago||
Perfect use of AI-assisted coding - a domain expert creating a focused, relatively straightforward (from a programming perspective) app.

@sungam, if your research agenda includes creating AI models for skin cancer, feel free to reach out (email in profile), I make a tool intended to help pure clinical researchers incorporate AI into their research programmes.

sungam 6 days ago|
Thanks, I am not currently doing research in this area - my lab-based research is mainly focused on the role of fibroblasts in skin cancer development
jonahx 6 days ago|
Cool project, and helpful for learning.

One concern:

I don't believe the rates at which you see "concerning" vs "not-concerning" in the app match the population rates. That is, a random "mole-like spot or thingy" on a random person will have a much lower base rate of being cancerous than the app would suggest.

Of course, this is necessary to make the learning efficient. But unless you pair it with base rate education it will create a bias for over-concern.

sungam 6 days ago|
Yes you are right - the representation is biased due to the image dataset that I have used.

I don't think it would be useful to match the population distribution, since the fraction of skin cancers would be tiny (less than 1 in 1000 of the images) and users would not learn what a skin cancer looks like. However, in the next version I will make it closer to 50:50 and highlight the difference from the population distribution.

jonahx 5 days ago||
Yes. As I said, matching the population base rate wouldn't be practical, so you'd need to educate on that separately from the identification learning.

Let's say I achieve 95% on the app, though. Most people would have a massively over-inflated sense of their correctness in the wild. If the actual fraction is only 1/1000 and I see a friend with a lesion I identify as concerning, then my actual success rate would be:

    1*0.95 / (0.05*999 + 1*0.95)
So ~1.9%, not 95%. Few people understand Bayesian updating.
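
Spelled out, the same arithmetic (a quick sketch, assuming a 5% false positive rate and 1-in-1000 prevalence):

    // Positive predictive value: per 1000 random lesions, 1 is cancer.
    const sensitivity = 0.95;                        // app-trained hit rate
    const falsePositiveRate = 0.05;                  // flags 5% of harmless lesions
    const truePositives = 1 * sensitivity;           // 0.95
    const falsePositives = 999 * falsePositiveRate;  // 49.95
    console.log(truePositives / (truePositives + falsePositives)); // ≈ 0.019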
sungam 5 days ago||
Thanks for this - I need to look at this more carefully