... and they are picked out in a conversation as the conversant who is supposedly "less human". TBH, that suggests a flaw either in the test or in people's presumptions about how humans behave.
> that suggests some flaw either in the test or in people's presumptions regarding how humans behave
Both. The Turing test is silly because it tests people's prejudices and presuppositions about machines, not the machines themselves objectively.
Also, people's presumptions will change quickly as we get used to LLM output, and we'll start detecting LLM speech with greater precision.