_bin_ 5 days ago

This seems like a problem that should be worked on

It also seems like we shouldn't let it prevent all AI deployment in the interim. It is better to raise the disease detection rate for part of the population by a few percent than not to raise it at all. Plus, it's not as if doctors or radiologists always diagnose with perfectly equal accuracy across all populations either.

Let's not let the perfect become the enemy of the good.

nradov 5 days ago

False positive diagnoses cause a huge amount of patient harm. New technologies should only be deployed on a widespread basis when they are justified based on solid evidence-based medicine criteria.

nonethewiser 5 days ago

No one says you have to use the AI models stupidly.

If it works poorly for black patients and female patients, don't use it for them.

Or simply don't use it for the initial diagnosis. Use it after the normal diagnostic process, more as a validation step.

Anyway, this all points to the need to capture biological information as input, or even to have separate models tuned to different factors.
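The "validation step" and "separate models" ideas above could be sketched roughly like this (a minimal illustration only — all names, cohort labels, and thresholds are hypothetical, not from any real clinical system):

```python
# Sketch: AI as a second reader after the normal diagnostic process,
# plus routing to a cohort-specific model where one exists.

def second_read(clinician_positive: bool, model_score: float,
                threshold: float = 0.5) -> str:
    """The model never makes the initial call; it only checks the
    clinician's finding and flags disagreements for human review."""
    model_positive = model_score >= threshold
    if model_positive == clinician_positive:
        return "concordant"       # nothing extra to do
    return "flag for review"      # a human resolves the disagreement

def pick_model(cohort: str, tuned_models: dict, general_model):
    """Use a model tuned for this cohort if one exists; otherwise
    fall back to the general model."""
    return tuned_models.get(cohort, general_model)
```

Used this way the model can only add review work, never silently replace the clinician's diagnosis — which is the whole point of treating it as validation rather than triage.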

Avshalom 5 days ago

Every single AI company says you should use AI models stupidly. Replacing experts is the whole selling point.

nonethewiser 5 days ago

OK so should we optimize for blindly listening to AI companies then?

Avshalom 5 days ago

We should assume people will use tools in the manner that they have been sold those tools yes.

nonethewiser 5 days ago

But the way these tools are sold includes research like this. This research is sold as proof that AI models have problems with bias. So by your reasoning I'd expect doctors to be wary of AI models.

Avshalom 5 days ago

doctors aren't being sold this. Private equity firms that buy hospitals are.

nradov 5 days ago

The guidelines on how to use a particular AI model can only be written after extensive clinical research and data analysis. You can't skip that step without endangering patients, and it will take years to do properly for each one.

acobster 5 days ago

> having seperately models tuned to different factors.

Sure. Separate but equal, presumably.

nonethewiser 5 days ago

What's the alternative? Withholding effective tools because they aren't effective for everyone? One model that's worse for everyone?

This is what personalized medicine is, and it gets more individualized than simply classifying people by race and gender. There are a lot of medical gains to be made here.

acobster 5 days ago

I'm not arguing against using the models per se. It's just that this is a social problem, to which there's no good technical solution. The hard road of social change is the only real alternative.

nradov 5 days ago

Citation needed. Personalized medicine seems like a great idea in principle, but so far attempts to put it into practice have been underwhelming in terms of improved patient outcomes. You seem to be assuming that these tools actually are effective, but generally that remains unproven.

bilbo0s 5 days ago

Mmmm...

You don't work in healthcare do you?

I think it would be extremely bad if people found out that, um, "other already disliked/scapegoated people", get actual doctors and nurses working on them, but "people like me" only get the doctor or nurse checking an AI model.

I'm saying that if you were going to do that, you'd better have an extremely high degree of secrecy about what you were doing in the background. Like, "we're doing this because it's medical research" kind of secrecy. Because there's a bajillion ways that could go sideways in today's world. Especially if that model performs worse than some rockstar doctor that's now freed up to take his/her time seeing the, uh, "other already disliked/scapegoated population".

Your hospital or clinic's statistics start to look a bit off.

Joint commission?

Medical review boards?

Next thing you know certain political types are out telling everyone how a certain population is getting preferential treatment at this or that facility. And that story always turns into, "All around the nation they're using AI to get <scapegoats> preferential treatment".

It's just a big risk unless you're 100% certain that model can perform better than your best physician. Which is highly unlikely.

This is the sort of thing you want to do the right way. Especially nowadays. Politics permeates everything in healthcare right now.