I wonder if there's room for using AI to gather someone's past edits as part of vetting, and running sentiment analysis over them to check how neutral their biases are.
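Something like this would be easy to prototype. A rough sketch below, using the real MediaWiki usercontribs endpoint plus TextBlob's subjectivity score as a crude stand-in for "neutrality" — note that scoring edit summaries (rather than the edit diffs themselves) and averaging subjectivity are my own simplifying assumptions, not an established method:

```python
# Rough sketch: score a Wikipedia user's "neutrality" by averaging
# TextBlob subjectivity over their recent edit summaries.
# Assumptions: edit summaries are a usable proxy for edit content,
# and mean subjectivity is a usable proxy for bias.
import requests
from textblob import TextBlob

API = "https://en.wikipedia.org/w/api.php"

def fetch_edit_summaries(username: str, limit: int = 50) -> list[str]:
    """Pull a user's recent edit summaries via the public MediaWiki API."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "uclimit": limit,
        "ucprop": "comment",
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=10)
    resp.raise_for_status()
    contribs = resp.json()["query"]["usercontribs"]
    return [c["comment"] for c in contribs if c.get("comment")]

def mean_subjectivity(texts: list[str]) -> float:
    """TextBlob subjectivity: 0.0 = fully objective, 1.0 = fully subjective."""
    if not texts:
        return 0.0
    return sum(TextBlob(t).sentiment.subjectivity for t in texts) / len(texts)

if __name__ == "__main__":
    summaries = fetch_edit_summaries("Jimbo Wales")  # any username works here
    print(f"Mean subjectivity over {len(summaries)} edit summaries: "
          f"{mean_subjectivity(summaries):.3f}")
```

Even this toy version surfaces the obvious problem: a low subjectivity score just means the person writes terse edit summaries, not that their edits are unbiased.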
Neutrality != necessarily accurate or useful. And the most neutral thing to say is nothing.
And most LLMs probably have Wikipedia as a significant part of their training corpus, so there's a big ouroboros issue too: you'd be using a model trained on Wikipedia to judge who gets to edit Wikipedia.