The AI pseudoagent[1], unless sentient and proficient in the chosen field of expertise, is not a peer. It's just a simulacrum of one. As such, it can only manifest simulacra of concepts such as "biases", "fairness", "accountability", etc.
The way I see it, it can function, at best, as a text-analysis tool, e.g. as part of the augmented-analytics engine in a CAQDAS.
1. Agents are defined as having agency, with sentience as an obvious prerequisite.
Do you really think sentience is a prerequisite for agency? That doesn't seem to follow. Plants are agents in a fairly meaningful sense, and yet I doubt they are sentient. I mean, AIs accept information and can be made to make decisions and act in the world. That seems like a very reasonable definition of agency to me.
> Do you really think sentience is a prerequisite for agency?
Yes, I do. :)
> [...] and can be made to make decisions and act in the world.
"Have to be made". Then again, this is not just about agency but about peership.
We honestly didn’t think much about the term “AI peer reviewer” and didn’t mean to imply it’s equivalent to human peer review. We’ll stick to using “AI reviewer” going forward.