bko 5 days ago

Suppose you have a system that saves 90% of lives in group A but only 80% of lives in group B.

This is because you have considerably more training data for group A.

You cannot release this life saving technology because it has a 'disparate impact' on group B relative to group A.

So the obvious thing to do is to have the technology intentionally kill ~1 out of every 10 patients from group A so that the efficacy rate is ~80% for both groups. Problem solved.

From the article:

> “What is clear is that it’s going to be really difficult to mitigate these biases,” says Judy Gichoya, an interventional radiologist and informatician at Emory University who was not involved in the study. Instead, she advocates for smaller, but more diverse data sets that test these AI models to identify their flaws and correct them on a small scale first. Even so, “Humans have to be in the loop,” she says. “AI can’t be left on its own.”

Quiz: What impact would smaller data sets have on efficacy for group A? How about for group B? Explain your reasoning.
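Here's a toy sketch of the answer (entirely synthetic data and made-up numbers, nothing from the article): train a classifier where group A outnumbers group B 9:1 and the two groups carry their class signal on different features, then shrink the training set and watch per-group accuracy.

    # Toy sketch: synthetic two-group data where the class signal lives on a
    # different feature axis per group, so a model trained mostly on group A
    # fits group A better. Shrink the training set and watch what happens.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, axis):
        # Binary labels; shift the group's signal axis by +/-1 per class.
        y = rng.integers(0, 2, size=n)
        X = rng.normal(size=(n, 2))
        X[:, axis] += 2.0 * (y - 0.5)
        return X, y

    # Held-out test sets, one per group (A's signal on axis 0, B's on axis 1).
    Xa_test, ya_test = make_group(5000, axis=0)
    Xb_test, yb_test = make_group(5000, axis=1)

    for n_total in (10000, 1000, 100):       # shrinking training budget
        n_a = int(n_total * 0.9)             # group A dominates 9:1
        n_b = n_total - n_a
        Xa, ya = make_group(n_a, axis=0)
        Xb, yb = make_group(n_b, axis=1)
        model = LogisticRegression().fit(
            np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
        print(f"n={n_total:5d}  group A acc={model.score(Xa_test, ya_test):.2f}"
              f"  group B acc={model.score(Xb_test, yb_test):.2f}")

The point: a smaller data set doesn't make the model better at group B. At best it narrows the gap by making the model worse at group A.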

janice1999 5 days ago

> You cannot release this life saving technology because it has a 'disparate impact' on group B relative to group A.

Who is preventing you in this imagined scenario?

There are drugs that are more effective on certain groups of people than on others. BiDil, for example, is an FDA-approved drug marketed to a single racial-ethnic group, African Americans, for the treatment of congestive heart failure. As long as the risks are understood, accommodations can be made ("this AI tool is for males only", etc.). However, such limitations and restrictions are rarely mentioned or understood by AI hype people.

bko 5 days ago

What does this have to do with the FDA or drugs? Re-read the comment I was replying to. It's complaining that a technology could serve one group of people better than another, and I would argue that preventing such disparities should not be our goal.

A technology should be judged by "does it provide value to some group without harming any other group". But endlessly dividing people into groups and declaring everything unfair because it benefits group A over group B, when that imbalance comes from the nature of the problem, just produces endless hand-wringing and conservatism, and it delays useful technology from being released out of fear of mean headlines like this one.

bilbo0s 5 days ago

No. That's not how it works.

It's contraindication, so you get a race to the bottom in a busy hospital or clinic: staff send group A through a line to see what the AI says, while doctors and nurses actually examine the patients in group B, because you're trying to move patients through the enterprise.

The AI is never even given a chance to fail group B. But now you've got another problem with the optics.

JumpCrisscross 4 days ago

> You cannot release this life saving technology because it has a 'disparate impact' on group B relative to group A

I think the point is you need to let group B know this tech works less well on them.

potsandpans 5 days ago

Imagine if you had a strawman so full of straw, it was the most straw-filled man that ever existed.

bko 5 days ago

From the article:

> “What is clear is that it’s going to be really difficult to mitigate these biases,” says Judy Gichoya, an interventional radiologist and informatician at Emory University who was not involved in the study. Instead, she advocates for smaller, but more diverse data sets that test these AI models to identify their flaws and correct them on a small scale first. Even so, “Humans have to be in the loop,” she says. “AI can’t be left on its own.”

What do you think smaller data sets would do to a model? It'll get rid of the disparity, sure.

milesrout 5 days ago

It is a hypothetical example, not a strawman.