I dunno. My perspective is that I've worked in ML for 30+ years now, and over that time, unsupervised clustering and direct featurization (i.e., treating the image pixels as the features rather than extracting features) have shown great utility in uncovering subtle correlations that humans don't notice. Sometimes, with careful analysis, you can sort of explain these ("it turns out the unlabelled images had the name of the hospital embedded in them, and hospital 1 had more cancer patients than hospital 2 because it was a regional cancer center, so the predictor learned to predict cancer more often for images that came from hospital 1"). In other cases, no human, even a genius, could possibly understand the combination of variables that contributed to an output (pretty much anything in cellular biology, where billions of instances of millions of different factors act along with feedback loops and other regulation to produce systems that are robust to perturbations).
I concluded long ago that I wasn't smart enough to understand some things, but by using ML, simulations, and statistics, I could augment my native intelligence and make sense of complex systems in biology. With mixed results: I don't think we're anywhere close to solving the generalized genotype-to-phenotype problem.
Sounds like "GeoGuessr" players who learn to recognize Google Street View pictures from a specific country by looking at the color of the Google Street View car or a specific piece of dirt on the camera lens.
Yeah, there's also a likely apocryphal story about tanks and machine learning: https://gwern.net/tank
The more you work with large-scale ML systems, the more you develop an intuition for these kinds of properties. If you spend a lot of time debugging models and training data, or even just doing dimensionality reduction and matrix factorization, you begin to realize that many features are highly correlated with each other, often close to scaled-linear copies of one another.
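That near-linear redundancy is easy to see with a quick SVD experiment. Here's a minimal sketch (made-up synthetic data, not from any real model): 20 features that are actually noisy scaled mixtures of only 3 latent factors, where almost all the variance collapses into the first 3 components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 samples, 20 "features", but only 3 truly
# independent latent factors; each column is a noisy linear mix of them.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

# Center and take the SVD -- the core computation behind PCA
# and many matrix factorization methods.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)

# Nearly all variance lives in the first 3 components: the 20
# columns are close to scaled-linear combinations of 3 factors.
print(np.round(explained[:5], 4))
```

Real training data is messier than this toy setup, but the same pattern shows up: the effective rank of the feature matrix is often far lower than the number of raw features.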