> Can it just tell which x-rays belong to Black or female patients and then use some latent racism or misogyny to change the diagnosis?
The opposite. The dataset is dominated by the "standard patient" (white male), and the generated diagnoses pattern-match on that. Because the input contains no gender or racial information, the model produces the statistically most likely result for a white male, a result that is less likely to be correct for any patient who doesn't fit that standard.
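A toy sketch of that mechanism (synthetic data and hypothetical groups A/B, not the study's actual model or data): train a classifier where one group dominates the training set and its disease presents with a stronger signal, give the model no demographic features at all, and the miss rate still splits by group.

```python
# Toy illustration only: synthetic data, made-up groups A/B standing in
# for majority/minority patients. Not the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One "imaging feature" moves by `shift` when disease is present;
    # a smaller shift stands in for a different disease presentation.
    y = rng.integers(0, 2, n)              # 1 = disease present
    X = rng.normal(0.0, 1.0, (n, 5))
    X[:, 0] += y * shift
    return X, y

# Training set: 90% group A (strong signal), 10% group B (weaker signal).
# Note that the features contain no group label whatsoever.
Xa, ya = make_group(9000, shift=2.0)
Xb, yb = make_group(1000, shift=0.8)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate each group separately on fresh, equally sized test sets.
for name, shift in [("group A", 2.0), ("group B", 0.8)]:
    Xt, yt = make_group(5000, shift)
    pred = model.predict(Xt)
    fnr = ((pred == 0) & (yt == 1)).sum() / (yt == 1).sum()
    print(f"{name}: false negative rate = {fnr:.2f}")

# The learned threshold sits where the majority's presentation puts it,
# so disease in group B is missed far more often -- no latent racism required.
```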
The better question is "are you actually just selecting for symptom occurrence by socioeconomic group?"
You could rephrase the question as "is the model better at diagnosing people who went to a certain school?" and, measured naively, the answer would likely appear to be yes, because school attendance correlates with who is well represented in the training data (see the sketch below).
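A hedged extension of the same toy setup (the "school" attribute here is entirely invented): slice the errors by any attribute that merely correlates with the underrepresented group, and the same gap shows up along that attribute, even though the model never sees it.

```python
# Same synthetic setup as above, plus a made-up proxy attribute ("school")
# that correlates with group membership but never enters the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000
group = (rng.random(n) < 0.1).astype(int)                  # 10% minority
school = np.where(rng.random(n) < 0.8, group, 1 - group)   # matches group 80% of the time
y = rng.integers(0, 2, n)                                  # 1 = disease present
X = rng.normal(0.0, 1.0, (n, 5))
X[:, 0] += y * np.where(group == 1, 0.8, 2.0)              # weaker signal for the minority

model = LogisticRegression().fit(X[:10000], y[:10000])     # first half = training
pred = model.predict(X[10000:])
yt, gt, st = y[10000:], group[10000:], school[10000:]

for label, mask in [("group=0", gt == 0), ("group=1", gt == 1),
                    ("school=0", st == 0), ("school=1", st == 1)]:
    sick = (yt == 1) & mask
    fnr = ((pred == 0) & sick).sum() / sick.sum()
    print(f"{label}: false negative rate = {fnr:.2f}")

# The school=1 slice shows an elevated miss rate purely because it contains
# more group=1 patients -- the proxy attribute "causes" nothing.
```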
Then why is the headline not "AI models miss disease in Asian patients" or even "AI models miss disease in Latino patients"?
The chosen framing just happens to align with what maximizes political capital in today's world.