> even striving for explainable/understandable systems
It's been some 6,000-8,000 years since the advent of writing and we still cannot explain or understand human intelligence, and yet we expect to be able to understand a machine that approaches or surpasses human intelligence? Isn't the premise fundamentally flawed?
I think I'd remain interested in more conclusive proof one way or the other, since by your logic everything that's currently unknown is unknowable.
Regardless of whether the project of explainable/understandable AI succeeds, though, everyone should agree it's a worthy goal. Unless you like the idea of stock markets, resource planning for cities, and whole societies under the control of technology that's literally indistinguishable from oracles speaking to a whispering wind. I'd prefer that someone else be able to hear/understand/check their math or their arguments. Speaking of 6,000-8,000 years since something happened: oracles and mystical crap like that should be forgotten relics of a bygone era, not an explicit goal for the future.
It is actually incredibly silly to expect full explainability as a goal, because any system sufficiently intelligent to do basic arithmetic will have behavior that is inexplicable.