Show HN: I'm a dermatologist and I vibe coded a skin cancer learning app
401 sungam 245 9/7/2025, 10:38:29 AM molecheck.info ↗
Coded using Gemini Pro 2.5 (free version) in about 2-3 hours.
Single file including all html/js/css, Vanilla JS, no backend, scores persisted with localStorage.
Deployed using ubuntu/apache2/python/flask on a £5 Digital Ocean server (but could have been hosted on a static hosting provider as it's just a single page with no backend).
Images / metadata stored in an AWS S3 bucket.
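For anyone curious about the deployment side, serving a single-file page like this from Flask really is only a few lines; a minimal sketch (the "index.html" file name and the port are my assumptions, not the actual setup):

```python
# Minimal sketch of serving a single-file page with Flask.
# "index.html" and the port are assumptions, not the actual deployment.
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/")
def index():
    # The whole app is one HTML file with inline JS/CSS; scores live in the
    # browser's localStorage, so the server keeps no state at all.
    return send_file("index.html")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```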
A short while ago, a dermatologist with this idea would have had to find a willing and able partner to do a bunch of work -- meaning that most likely it would just remain an idea.
This isn't just for non-tech people either -- I have a decades-long list of ideas I'd like to work on but simply do not have time for. So now I'm cranking up the ol' AI agents and seeing what I can do about it.
It’s hard to take the name “vibe coding” seriously, and maybe that was the whole point, but I feel like AI coding is a bit more serious than the name “vibe coding” implies.
Anyone that disagrees that it should be taken more seriously can surely at least agree that it’s likely it will cross that threshold in the not too distant future, yet we’re still going to be stuck with the silly name.
Even worse is the tendency of scripting languages to try to be robust against errors, so you end up with programs that are filled with extremely subtle nuance in things like their syntax parsing, which in many ways makes them substantially more complex than languages with extremely strict syntactic enforcement.
It would be nice if we could have our cake and eat it here. With LLMs there are certainly opportunities if we can find ways to allow both custom scripting and large-scale, domain-constrained logic.
In a similar way, VBA was amazing in MS Office back in the day. If you ever saw someone who was good at Visual Basic in Excel, it was impressive how much work could get done by a motivated user who would have been hesitant to call themselves a programmer.
For use on others, no. It's not about just the quality, it's about not even knowing what you're selling.
Lose the tunnel vision.
Focusing on "but security lol" is a bad take, IMO. Every early attempt is bad at something. Be it security, or scale, or any number of problems. Validating early is good. Giving non-tech people a chance is good. If an idea is worth pursuing, you can always redo it with "experts". But you can't afford experts (hell, you can't even afford amateurs) for every idea you want put into an MVP.
MVP means just enough engineered code to solve a problem, rough around the edges and lacking features sure, but not built by someone who has literally no idea what they were doing.
Prototypes of physical products are never put into production and sold to consumers. Unfortunately software prototypes "run", and are sold at that point. Then they begin to scale, and the inherent flaws in their design are amplified. The same thing used to happen with MS Access apps; the same thing still happens with "low code" solutions.
The engineers cost just as much after the prototype phase, but if you don't hire them to build your MVP then you never have one.
Everyone wants to pretend that the software used to be better, but the reality is that MVPs and sometimes even public launches were always a house of cards.
True productivity is when what is produced is of benefit.
The only kind of productivity is progress toward somebody's arbitrary goals. There's nothing "true" about it.
I've been coding professionally for ~20 years now, so it's not that I don't know what to do, it's just a time sink
Now I'm blasting through them with AI and getting them out there just in case
They're a bit crap, but better than not existing at all, you never know
Most such prototypes get tossed because of a flaw in the idea, not because they lacked professional software help. If something clicks the prototype can get rebuilt properly. Raising the barriers to entry means significantly fewer things get tried. My 2c.
Not in an industry where prototypes very often get thrown into production because decision makers don't know anything about the value of good tech, security, etc
I don't agree. I think that by LLM/vibe coding my random ideas I've actually wasted more time than if I had done them manually. The vibe code, as you said, is often crap, and often, after I've spent a lot of time on it, I realize there are countless subtle errors that mean it's not actually doing what I intended at all. I've learned nothing and made a pointless app that doesn't even do anything but looks like it does.
That's the big allure that has been keeping the "AI" hype floating. It always seems so dang close to being a magic wand. Then, after time spent reviewing with a critical eye, you realize it has been tricking you, like a janitor just sweeping dirt under the rug.
At this point I've relegated LLMs to advanced find-and-replace and formatted data structuring (take this list and make it into JSON), and that's about it. There are basically tools that already exist for everything else LLMs do, and they do it better.
I can't count how many times "AI" has taken some logic I wanted and produced a bunch of complex-looking stuff that takes forever to review, and then I find out it fudged the logic to simply always be true/false when it's not even a boolean problem.
You just need one program that can read the training data, train a model, and then do the classification based on input images from the user.
This works for basically any kind of image, whether it's dogs/cats or skin cancer.
You can take the code from a dog/cat classifier and use it for anything.
You only need to change the training data.
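To make that concrete, here is a rough sketch of the kind of generic transfer-learning classifier the parent is describing, using PyTorch/torchvision; the folder layout (data/train/<class>/...) is an assumption, and the only task-specific part is what you put in those folders:

```python
# Rough sketch of a generic two-class image classifier via transfer learning.
# The folder layout data/train/<class_name>/*.jpg is an assumption; swapping
# "dogs vs cats" for "benign vs malignant" only means changing those folders.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_data = datasets.ImageFolder("data/train", transform=transform)  # the only task-specific line
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the new head only
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Whether a model trained that way is any use clinically is a separate question, which is what the replies below get at.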
And, more importantly, I don't think you'll see good results either from a vibe-coded solution.
So I don't think your comment makes sense here.
Indeed. It is first and foremost a statistics and net patient outcomes problem.
The image classification bit - to the best of the current algorithms' abilities - is essentially a solved problem (even if it isn't quite that simple), and when better models become available you plug those in instead. There is no innovation there.
The hard part is the rest of it. And without a good grounding in medical ethics and statistics that's going to be very difficult to get right.
"Vibe coding" does a surprisingly good job at this problem.
Yes it does. :)
I'm not going to argue with the idea that a pre-made classifier can be improved upon by experts.
But pre-made classifiers exist and are useful for a very large variety of tasks. This was the original point.
> You can take the code from
https://xkcd.com/2501/
More seriously, for most non-programmers, even typing into a console is "coding."
Unfortunately, I do advanced knowledge work, and the tools I need technically often exist...if you're a coder.
Coding is not that accessible. The intermediary mental models and path to experience required to understand a coding task are not available to the average person.
Lots of people like computers but earn a living doing something else
I went from 50% to 85% very quickly. And that’s because most of them are skin cancer and that was easy to learn.
So my only advice would be to make closer to 50% actually skin cancer.
Although maybe you want to focus on the bad ones and get people to learn those more.
This was way harder than I thought this detection would be. Makes me want to go to a dermatologist.
Of course, in reality the vast majority of skin lesions and moles are harmless, and the challenge is identifying those that are not. I think that even a short period of focused training like this can help the average person identify a concerning lesion.
If I were to code this for "real training" of a dermatologist, I'd make this closer to "real world" training rate. As a dermatologist, I'll imagine that probably just 1 out of 100 (or something like that) skin lesions that people could imagine are cancerous, actually are so.
With the current dataset, there are just too many cancerous images. This makes it kind of easy to just flag something as "cancerous" and still retain a good "score" - but that good score is moot: if as a dermatologist you send _too many_ people without cancer for further exams, you're negating the usefulness of what you're doing.
And then, once they have learned, you make it progressively harder and harder. Basically, the closer the mix is to 50%, the harder it will be to score higher than chance.
I wish it also explained the decision making process, how to understand from the picture what is the right answer.
I'm really getting lost between melanoma and seborrheic keratosis / nevus.
I went through ~120 pictures, but couldn't learn to distinguish those.
Also, the guide in the burger menu leads to a page that doesn't exist: https://molecheck.info/how-to-recognise-skin-cancer
Being honest, I didn't expect anyone apart from a few of my patients to use the app and certainly did not expect front page HN!
Thanks for making this! A bit more polish and this is something I’d make sure everyone in my family has played with.
Imagine a world where every third person is able to recognise worrying skin lesions early on.
They will go through your images until they get a good score, believe themselves an expert, and proceed to diagnose themselves (and their friends).
By the time you have an image set that is representative and that will actually educate people to the point where they know what to do and what not to do you've created a whole raft of amateur dermatologists. And the result of that will be that a lot of people are going to knock on the doors of real dermatologists who might tell them not to worry about something when they are now primed to argue with them.
I've seen this pattern before with self diagnosis.
Pictures with purple circles (e.g. faded pen ink on light skin outlining the area of concern) are a strong indicator of cancer. :wink:
It's quite interesting to have a binary distinction: 'concerned vs not concerned', which I guess would be more relevant for referring clinicians, rather than getting an actual diagnosis. Whereas naming multiple choice 'BCC vs melanoma' would be more of a learning tool useful for medical students..
Echoing the other comments, but it would be interesting to match the cards to the actual incidence in the population or in primary care - although it may be a lot more boring with the amount of harmless naevi!
For the patient I think the decision actually is binary - either (i) I contact a doctor about this skin lesion now or (ii) I wait for a bit to see what happens or do nothing. In reality most skin cancers are very obvious even to a non-expert and the reason they are missed are that patients are not checking their skin or have no idea what to look for.
I think you are right about the incidence - it would be better to have a more balanced distribution of benign versus malignant, but I don't think it would be good to just show 99% harmless moles and 1% cancers (which is probably the accurate representation of skin lesions in primary care) since it would take too long for patients to learn the appearance of skin cancer.
I am a skin cancer doctor in Queensland and all I do is find and remove skin cancers (find between 10 and 30 every day). In my experience the vast majority of cancers I find are not obvious to other doctors (not even seen by them), let alone obvious to the patient. Most of what I find are BCCs, which are usually very subtle when they are small. Even when I point them out to the patient they still can't see them.
Also, almost all melanomas I find were not noticed by the patient and they're usually a little surprised about the one I point to.
In my experience the only skin cancers routinely noticed by patients are SCCs and Merkel cell carcinomas.
With respect, if "most skin cancers are very obvious even to a non-expert" I suggest the experts are missing them and letting them get larger than necessary.
I realise things will be different in other parts of the world and my location allows a lot more practice than most doctors would get.
Update: I like the quiz. Nice work! In case anyone is wondering, I only got 27/30. Distinguishing between naevus and melanoma without a dermatoscope on it is sometimes impossible. Get your skin checked.
I have quite a few patients from the UK who have had several skin cancers. Invariably they went on holidays to Italy or Spain as a child and soaked up the sun.
Keep up the great work.
Could definitely be a misclassification, however a small proportion of moles that look entirely harmless to the naked eye and under the dermatoscope (skin microscope) can be cancerous.
For example, have a look at these images of naevoid melanoma: https://www.google.com/search?tbm=isch&q=naevoid+melanoma
This is why dermatology can be challenging and why AI-based image classification is difficult from a liability/risk perspective
I was previously clinical lead for a melanoma multidisciplinary meeting and 1-2 times per year I would see a patient with a melanoma that presented like this; looking back at previous photos there were no features that would have worried me.
The key thing that I emphasise to patients is that even if a mole looks harmless it is important to monitor for any signs of change since a skin cancer will almost always change in appearance over a period of several months
That is very scary.
So the only way to be sure is to have everything sent to the lab. But I'm guessing cost/benefit of that from a risk perspective make it prohibitive? So if you're an unlucky person with a completely benign-presenting melanoma, you're just shit out of luck? Or will the appearance change before it spreads internally?
"idk but that's what it says" somehow this does not inspire confidence in the skin cancer learning app.
It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades, and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that.
"This is awesome. Great use of AI to realize an idea. Subject matter experts making educational tools is one of the most hopeful things to come out of AI.
It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades, and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that."
Let's take that bit by bit then if you find it hard to correlate.
> This is awesome.
Agreed, it is a very neat demonstration of what you can do with domain knowledge married to powerful technology.
> Great use of AI to realize an idea.
This idea, while a good one, is not at all novel and does not require vibe coding or LLMs in any way, but it does rely on a lot of progress in image classification in the last decade or so if you want to take it to the next level. Just training people on a limited set of images is not going to do much of anything other than to inject noise into the system.
> Subject matter experts making educational tools is one of the most hopeful things to come out of AI.
Well.. yes and no. It is a hopeful thing but it doesn't really help when releasing it bypasses the whole review system that we have in place for classifying medical devices. And make no mistake: this is a medical diagnostic device and it will be used by people as such even if it wasn't intended as such. There is a fair chance that the program - vibe coded, remember? - has not been reviewed and tested to the degree that a medical device normally would be and that there has been no extensive testing in the field to determine what the effect on patient outcomes of such an education program is. This is a difficult and tricky topic which ultimately boils down to a long - and possibly expensive - path on the road to being able to release such a thing responsibly.
> It’s just a bummer that it’s far more frequently used to pump wealth to tech investors from the entire class of people that have been creating things on the internet for the past couple of decades
As I wrote, I'm familiar with quite a few startups in this domain. Education and image classification + medical domain knowledge is - and was - investable and has been for a long time. But it is not a simple proposition.
> and that projects like this fuel the “why do you oppose fighting cancer” sort of counter arguments against that.
Hardly anybody that I'm aware of - besides the Trump administration - currently opposes fighting cancer, there are veritable armies of scientists in academia and outside of it doing just that. This particular kind of cancer is low hanging fruit because (1) it is externally visible and (2) there is a fair amount of training data available already. But even with those advantages the hard problems, statistics, and ultimately the net balance in patient outcomes if you start using the tool at scale are where the harsh reality sets in: solving this problem for the 80% of easy to classify cases is easy by definition. The remaining 20% are hard, even for experts, more so for a piece of software or a person trained by a piece of software. Even a percentage point or two shift in the confusion matrix can turn a potentially useful tool into a useless one or vice versa.
That's the problem that people are trying to solve, not the image classification basics and/or patient education, no matter how useful these are when used in conjunction with proper medical processes.
But props to the author for building it and releasing it, I'm pretty curious about what the long term effect of this is, I will definitely be following the effort.
Better like that?
I also hope it will completely kill the US pharmacy conglomerates.
AI was trained on public domain knowledge. All things we get from it should be free and available everywhere.
I can only hope.
That's a valid hope, but not a very realistic one just yet. The error rates are just too high. Medicine is messy and complex. Yes, doctors get it wrong every now and then. But AI gets it wrong far more frequently, still. It can be used as a tool in the arsenal of the medical professional, but we are very far away from self-service diagnosis for complex stuff.
> I also hope it will completely kill the US pharmacy conglomerates.
That is mostly based on molecules and patents, not so much on diagnostics, that's a different group of companies.
> AI was trained on public domain knowledge. All things we get from it should be free and available everywhere.
Not necessarily, but for the cases where it is I agree that the models should be free and open.
> I can only hope.
Yes. I've seen some very noble efforts strand on lack of capital and every time that happens I realize that not everything is as simple as I would like it to be. I've just financed a - small - factory for something that I consider both useful and urgent, but my means are limited and it was clear that I had no profit motive (which actually means my money went a lot further than if I had had a profit motive).
Once you get into medical education or diagnostics the amounts usually run into the millions if you want to really move the needle. No single individual is going to put that out there on their own dime unless they were very wealthy to begin with. I've invested in a couple of companies like that. They all failed, predictably, because raising follow on investments for such stuff is very hard, even if you can get it to work in principle.
The best example of stuff like that that did work is how the artificial pancreas movement is pushed forward hard by people hacking sensors and insulin pumps. They have forced industry to wake up and smell the coffee: if they weren't going to be the ones to offer it then someone else inevitably would. Even so it is a hard problem to solve properly. But it is getting there:
https://rorycellanjones.substack.com/p/wearenotwaiting-the-p...
I believe this is a simple educational quiz using a pre-selected set of images from cited medical publications to help people distinguish between certainly benign and potentially cancerous skin anomalies… Is that incorrect?
But that won't stop people from believing they are now able to self diagnose.
Such pamphlets typically contain a lot more guidance on what the context is within which they are provided. They don't come across as a 'quiz' even if they use the same images and they do not try to give the impression of expertise gathered. They tend to be created by communications experts who realize full well what the result of getting it wrong can be. Compared to 'research on the internet' there is a lot of guidance in place to ensure that the results will be a net positive.
https://www.kanker.nl/sites/default/files/library_files/563/...
Is a nice example of such a pamphlet. You were complaining about the number of words I used. Check the number of words there compared to the number of words in the linked website.
There is no score, there is no 'swiping' and there is tons of context and raising of awareness, none of which is done by this app. I'm not saying such an app isn't useful, but I am saying that such an app without a lot of context is potentially not useful and may even be a negative.
One concern:
I don't believe the rates at which you see "concerning" vs "not-concerning" in the app match the population rates. That is, a random "mole-like spot or thingy" on a random person will have a much lower base rate of being cancerous than the app would suggest.
Of course, this is necessary to make the learning efficient. But unless you pair it with base rate education it will create a bias for over-concern.
I don't think it would be useful to match the population distribution since the fraction of skin cancers would be tiny (less than 1:1000 of the images) so users would not learn what a skin cancer looks like, however in the next version I will make it closer to 50:50 and highlight the difference from the population distribution.
Let's say I achieve a 95% on the app though. Most people would have a massively over-inflated sense of their correctness in the wild. If the actual fraction is only 1/1000 and I see a friend with a lesion I identify as concerning, then my actual success rate would be:
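(Assuming the 95% applies as both sensitivity and specificity, against a 1-in-1000 base rate:)

P(cancer | flagged) = (0.95 × 0.001) / (0.95 × 0.001 + 0.05 × 0.999) ≈ 0.0187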
So ~1.8%, not 95%. Few people understand Bayesian updating.
@sungam, if your research agenda includes creating AI models for skin cancer, feel free to reach out (email in profile). I make a tool intended to help pure clinical researchers incorporate AI into their research programmes.
We take spreadsheet for granted. VisiCalc back in the day unlocked computing for an average person in the same way AI does today. Back then to tabulate some stats you’d need a team of programmers. When spreadsheets became available, anyone could figure out how to essentially program a computer without software background.
It would be interesting to see how spreadsheets failed/succeeded to learn the limits of vibe coding. For example it’s a common meme that you find teams using spreadsheets as databases. Perhaps they are so successful that they end up being misused. Would the same happen with AI coding?
I was thinking of training a convnet to accurately classify pictures of moles as normal vs abnormal. The user could take a photo, upload it to a diagnostic website, and get a diagnosis.
It doesn’t seem like an overly complex model to develop and there is plenty of data referring to photos that show normal vs abnormal moles.
I wonder why a product hasn’t been developed, where we are using image detection on our phones to actively screen for skin cancer. Seems like a no brainer.
My thinking is there are not enough deaths to motivate the work. Dying from melanoma is nasty.
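For what it's worth, the mechanics of that upload-and-classify flow are genuinely small. A rough sketch (the route, model file, and label names are assumptions for illustration, and this sidesteps all of the validation and liability questions raised elsewhere in the thread):

```python
# Sketch of an "upload a photo, get a classification" endpoint.
# Route, model file, and labels are illustrative assumptions only.
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)

labels = ["benign", "suspicious"]  # assumed label order from training
model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, len(labels))
model.load_state_dict(torch.load("mole_classifier.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@app.route("/classify", methods=["POST"])
def classify():
    # Read the uploaded image, run it through the model, return class probabilities.
    image = Image.open(io.BytesIO(request.files["photo"].read())).convert("RGB")
    with torch.no_grad():
        probs = torch.softmax(model(preprocess(image).unsqueeze(0)), dim=1)[0]
    return jsonify({label: round(float(p), 3) for label, p in zip(labels, probs)})

if __name__ == "__main__":
    app.run()
```

The hard part, as the reply below explains, is everything around that: the data, the validation, and what the predictions actually do to patient outcomes.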
Regarding AI-assisted skin cancer diagnosis: This is a huge area that started with the publication of Esteva et al (https://www.nature.com/articles/nature21056) and there have been hundreds of publications since. There are large publicly available datasets that anyone can work with (https://challenge.isic-archive.com/).
My lab has previously trained / evaluated convnets for diagnosis of skin cancer e.g. see this publication: https://pubmed.ncbi.nlm.nih.gov/32931808/
I have no doubt that it will be possible to train an AI model to perform at the same level as a dermatologist, and AI models will become increasingly relevant. The main challenge at the moment is navigating uncertainty / liability, since a very small proportion of moles / skin lesions that appear entirely harmless both to the naked eye and with the dermatoscope (skin microscope) are cancerous.
Perhaps you may want to question your bias and ability to process criticism.
Anyone who shares their ideas publicly will receive criticism. Not only is it ok, it’s helpful to expand the discussion beyond your bias.
Maybe to you, but others in this thread found it interesting.
> Lowers the bar for what good software really is.
Software is a means to some end, not the end in itself. I can make the best coded software that does nothing [0], there is no point to that other than to practice one's skills, but again, those skills are to achieve something in the end.
[0] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
Further, your suggestions are inactionable, and again, miss the point. It’s a low effort - “Lol, why don’t you just…”. No, the point is not to find skin cancer. The point is to show a bunch of pictures to people who are interested, and let them see if they can identify worrying skin lesions.
Improved my score from an abysmal 40% to above 95% accuracy in under 15 units. Also realized that I have a skin lesion that warrants an immediate dermatologist visit.
Your characterizations are unnecessarily salty.
AFAIK Netflix got rid of their 5-star rating because the signal over 2 stars wasn’t worth the mental overhead of users having to decide between a 4 and a 5. Also, star ratings are culturally dependent, so you have to normalize for that effect. In general it’s a total hassle.
The app already exists btw. Did nobody in this thread google it before saying it couldn't work?
As I understand it, size is one of the key indicators of melanoma. But in some of these images, it’s difficult to tell whether the mole is 1 mm or 10 mm. I assume your image set doesn’t include size information. If you can find sources with rulers or some kind of scale, that would be very helpful.
FWIW @sungam - I'm one of the maintainers of the ISIC Archive, so feel free to let me know if finding/downloading data could be made easier. It's always interesting to see people using our data in the wild :)
Fortunately basal cell carcinomas are very slow growing and do not spread elsewhere in the body or cause other health issues, and a delay of a few months in diagnosis does not have a big impact on outcome.
Seeing someone actually build something like this, even if it's not perfect, gives me a sense of hope. When you combine domain expertise with some AI tools, you don’t have to wait around for someone else. You can just start.
OP, what are some of the other common options for a spot on the body aside from common moles, cancer, and keratoses? Solar lentigines, freckles, bug bites, eczema? I'm also curious what the actual chance of cancer is given a random mole anywhere on the body, obviously a more involved question.
The chance of a random skin lesion being skin cancer is extremely low. Apart from the appearance key things to look for are a lesion that is not going away particularly if it is changing in appearance.
Here are some other common skin lesions:
- Dermatofibroma (harmless skin growth)
- Actinic keratosis (sun damage)
- Milium
- Comedone
- Acne pustule / nodule
- Viral wart
- Molluscum contagiosum (harmless viral growth)
- Cherry angioma (harmless blood vessel growth)
- Spider naevus (another type of blood vessel growth)
There are more than 2000 diagnoses in dermatology so not an exhaustive list!
It was done by a small team in Hungary, with the support of MDs of course. (I would guess that the majority of the work was coordinating with MDs, getting them to teach the software... and collecting photos of lesions. Must have been fun!)
They probably could not monetize it (or were not interested, or it was just too much work for a side hustle)... the sad reality of living in Eastern Europe.
I do think that the idea is perfect, it is non-invasive, but could warn you of a potentially very dangerous condition in time. You don't have to wait for the doctor, or unnecessarily visit them. I would actually pay for this as a service.
Bar a lack of a vibrant VC scene, they had the very same monetization options someone in SF would have.
The most probable reason they did not was to avoid assuming legal responsibility for the results.
Would be useful to add some explanation on the defining features that would give it away to a dermatologist.
The question could be: which images are most often mistaken? What characteristics do they share? Knowing the images with the highest false-negative rates would be really valuable for helping people learn what not to ignore.
If anyone is interested: Coded using Gemini Pro 2.5 (free version) in about 2-3 hours. Single file including all html/js/css, Vanilla JS, no backend, scores persisted with localStorage.
Photos of basal cell carcinoma (no affiliation): https://dermnetnz.org/topics/basal-cell-carcinoma
I would love to see more of such classifiers for other medical conditions, googling for images tends not to produce a representative sample.
May have been in the training data.
I just considered that the language model (Gemini) may have been especially effective at coding this specific app idea, since my old app (which is on GitHub) was probably in the data it was trained on.
My dad passed away from squamous cell carcinoma in 2010. In retrospect, through my casual research into the space and tools like this one, it occurs to me that the entire event was likely preventable and occurred merely because we did not react quickly enough to the cancer’s presence.
¹ https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Nice app. But wouldn't a doctor normally get a history as well? Anyway, I'm not a doctor which is probably why I got most of the answers wrong :)
The app is hosted on my Digital Ocean server that hosts a few other projects including my Revessa Health site
Single file including all html/js/css, Vanilla JS, no backend, scores persisted with localStorage.
Deployed using ubuntu/apache2/python/flask on a £5 Digital Ocean server (but could have been hosted on a static hosting provider as it's just a single page with no backend).
Images / metadata stored in an AWS S3 bucket.
I don't fucking care if we all lose our jobs, as long as we also fuck the VCs by making them obsolete.
Think a set number of questions to start with would be good. Not sure if there’s an end point, I drifted off after ~20 or so
YAY, three cheers for all the soy boys building AI. See you on unemployment soon.
Will add asap but currently focused on answering questions!
Fortunately my £5 Digital Ocean server is coping fine so far...
Idea: distribution of player scores
I'm going to get some models checked out.
I have been working with a startup to try and develop a non-invasive molecular test for melanoma so hopefully this will be possible in the future.
https://share.icloud.com/photos/048Y_ALNTMwlP3QqZNwja5HEQ
One or two seemed quite obvious to me as concerning or not but turned out to be the other way
For personal fulfillment, humanity's evolutionary fitness, and for commercial purposes
This app ultimately amounts to something that has been done millions of times, and so I think it's actually quite empowering for individuals to be able to quickly build mockups of apps like this for themselves without needing to spend upwards of $75/hr to hire some freelance dev to do it for them.
"Vibe coded" - asked ChatGPT or whatever alternative to do the thing for me. There is no fucking vibe here, just another cheesy term.