rjakob 6 days ago

Wouldn't that just require a robust, predefined ruleset we could all agree on? Let's make the dream come true!

yusina 6 days ago

The rule set is simple: "Don't be biased." But what does that mean? That is the problem. It's hard (read: impossible) to define in technical, formal terms, because bias is at root a social problem, not a technical one. Therefore you won't be able to solve it with technology, just like poverty, world peace, or racism.

The best you can hope for is to provide technical means to point out indicators of bias. But anything beyond that could, at worst, do more harm than good. ("The tool said this result is unbiased now! Keep your skepticism to yourself and let me publish!")

tbrownaw 5 days ago

> That's because bias is at the root a social problem, not a technical one.

Bias is systematic error.

Maybe your thermometer just always reads 5° high.

Maybe it reads high on sunny days and low on rainy days.

Bias is distinct from random error, as when an electronic thermometer with a loose wire gives jittery readings.

For classification problems, there's also this impossibility result: https://www.marcellodibello.com/algorithmicfairness/handout/...

nathan_compton 5 days ago

Not in the context of this conversation.

rjakob 5 days ago

Then let's try to be as unbiased as possible and fully transparent (which should also help with bias).