AI Eroded Doctors' Ability to Spot Cancer Within Months in Study
43 points by zzzeek | 44 comments | 8/13/2025, 12:51:21 AM | bloomberg.com ↗
My follow-up question is now: “if junior doctors are exposed to AI through their training, is doctor + AI still better overall?” E.g., do doctors need to train their ‘eye’ without using AI tools in order to benefit from them?
I agree.
I work in healthcare, and if you take a tech view of all the data, there is a lot of really low-hanging fruit to pick to make things more standardised and efficient. One example is extracting data from patient records for clinical registries.
We are trying to automate that as much as possible, but I have the nagging sense that we’re now depriving junior doctors of the opportunity to look over hundreds of records about patients treated for X to find the data and ‘get a feel’ for it. Do we now have to make sure we’re explicitly teaching something since it’s not implicitly being done anymore? Or was it a valueless exercise?
The assumptions that we make about training on the job are all very Chesterton’s fence, really.
Paper link: https://www.thelancet.com/journals/langas/article/PIIS2468-1...
Support: Statistically speaking, on average, for each 1% increase in ADR (adenoma detection rate), there is a 3% decrease in the risk of CRC (colorectal cancer).
My objection is all the decimal points without error bars. Freshman physics majors are beat on for not including reasonable error estimates during labs, which massively overstates how certain they should be; sophomores and juniors are beat on for propagating errors in dumb ways that massively understate how certain they should be.
Into this article strolls a rando doctor (granted: with more certs than I will ever have) with a bunch of decimal points. One decimal place, but that still looks dumb to me. What is the precision of your measuring device? Do you have a model for your measuring device? Are you quite sure that the error bars, whose existence you don't even acknowledge, don't cancel out your result?
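To make that concrete, here is a minimal Python sketch of the error-bar point. The 3%-per-1%-ADR figure is the point estimate quoted above; the confidence interval, the 5-point ADR gain, and the multiplicative risk model are all invented purely for illustration.

    # Propagate an assumed confidence interval on the quoted point estimate
    # of "3% lower CRC risk per 1-point ADR gain". Everything except that
    # 3% figure is made up for illustration.

    def relative_risk(adr_gain_points: float, drop_per_point: float) -> float:
        """Relative CRC risk after an ADR increase, assuming each 1-point
        ADR gain multiplies risk by (1 - drop_per_point)."""
        return (1.0 - drop_per_point) ** adr_gain_points

    adr_gain = 5.0                       # hypothetical 5-point ADR improvement
    point, low, high = 0.03, 0.01, 0.05  # quoted estimate plus a made-up 95% CI

    for drop in (low, point, high):
        rr = relative_risk(adr_gain, drop)
        print(f"drop per point = {drop:.0%}: relative risk {rr:.2f} "
              f"({1 - rr:.0%} total reduction)")

With that made-up interval, the implied reduction for a 5-point ADR gain spreads from roughly 5% to 23%, which is exactly why a bare point estimate with one decimal place and no stated uncertainty is suspect.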
> While short-term data from randomized controlled trials consistently demonstrate that AI assistance improves detection outcomes...
Perhaps a monthly session to practice their skills would be useful? So they don’t atrophy…
Instead, I would hope that we can engineer around the downtime. Diagnosis is not as time-critical as other medical procedures, so if a system is down temporarily they can probably wait a short amount of time. And if a system is down for longer than that, then patients could be sent to another hospital.
That said, there might be other benefits to keeping doctors' skills up. For example, I think glitches in the diagnosis system could be picked up by doctors if they are double-checking the results. But if they are relying on the AI system exclusively, then unusual cases or glitches could result in bad diagnoses that could otherwise have been avoided.
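Here is a minimal sketch of that fallback idea, with every name hypothetical (Case, score_with_ai, queue_for_radiologist are all invented): the AI score is attached when the service responds, but every case still lands in a human review queue, so an outage or a glitch degrades to human-only review rather than to an unreviewed diagnosis.

    # Hypothetical triage sketch: treat the AI scorer as an optional aid
    # and route every case to a human reviewer regardless.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Case:
        case_id: str
        image_path: str

    def score_with_ai(case: Case) -> float:
        """Stand-in for a call to the AI detection service; may raise."""
        raise TimeoutError("AI service unreachable")  # simulate an outage

    def queue_for_radiologist(case: Case, ai_score: Optional[float]) -> None:
        """Stand-in for the ordinary human review queue."""
        note = f"AI score {ai_score:.2f}" if ai_score is not None else "AI unavailable"
        print(f"{case.case_id} -> radiologist queue ({note})")

    def triage(case: Case) -> None:
        try:
            score = score_with_ai(case)
        except Exception:
            score = None  # outage or glitch: fall back to human-only review
        queue_for_radiologist(case, score)

    triage(Case("demo-001", "/scans/demo-001.dcm"))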
The X-ray machine would still work; it’s connected directly to a PC. A doctor can look at the image on the computer without asking some fancy cloud AI.
A power outage, on the other hand, is a true worst-case scenario, but that’s different.
Like one of the biggest complaints I've heard around hospital IT systems is how brittle they are because there are a million different vendors tied to each component. Every new system we add makes it more brittle.
Then we could just go Google it, and/or skim the Wikipedia page. If you wanted more details you could follow references - which just made it easier to do the first point.
Now skills themselves will be subject to the same generalizing phenomenon as finding information.
We have not seen information-finding become better as technology has advanced. More people are able to become barely capable regarding many topics, and this has caused a lot of fragmentation and an overall lowering of general knowledge.
The overall degradation that happened with politics and public information will now be generalized to anything that AI can be applied to.
You race your MG? Hey, my exoskeleton has a circuit racer blob, we should go this weekend. You like to paint? I got this Bouguereau app, I'll paint some stuff for you. You're a physicist? The font for chalk writing just released, so maybe we can work on the grand unified theory sometime; you say your part and I can query the LLM and correct your mistakes.
I think we have to treat the algorithm as a medical tool here, whose maintenance will be prioritised as such. So your premise is similar to "If all the scalpels break...".
The premise is absolutely not the same.
To continue to torture analogies, and be borderline flippant, almost no one can work an abacus like the old masters. And I don't think it's worth worrying about. There is an opportunity cost in maintaining those abilities.
My solution is to increase the amount I write purely by hand.
However, I would like someone to explain this to me: If I haven't needed these skills in enough time for them to atrophy, what catastrophic event has suddenly happened that means I now urgently need them?
This just sounds very much like the old "we've forgotten how to shoe our own horses!" argument to me, and exactly as relevant.
The scenario we want to avoid is:
"sorry, your claim was denied, the AI said your condition did not need that treatment. You'll have to sell your house."
I’m sure similar things have been said about:
- calculators & impact on math skills
- sewing machines & people’s stitching skills
- power tools & impacts on craftsmanship.
And for all of the above, there are both pros and cons that result.
If we accidentally put ourselves in a position where humans' fundamental skills are being eroded away, we could lose our ability to make deep progress in any non-AI field and get stuck on a suboptimal and potentially dangerous trajectory.
For example, (a) we’ve lost the knowledge of how the Egyptian pyramids were built. Maybe that’s okay, maybe it’s not. (b) On a smaller scale, we’ve also forgotten how to build quality horse-and-buggies, and that’s probably fine since we now live in a world of cars. (c) We almost forgot how to send someone to the moon, and that was just within the last 50 years (and that’s very bad).