Except that technology is on the side of anonymity this time. LLMs can provide a pretty solid defense against such attacks — just ask ChatGPT to rewrite your message in a random writer's style. The issue is that you'll end up sounding like an LLM, but hey, tradeoffs.
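To make "such attacks" concrete: stylometric deanonymization typically matches texts by statistical features like function-word frequencies. Here's a toy sketch (the feature list and function names are illustrative, not any real tool's API) of the kind of fingerprint an LLM rewrite would scramble:

```python
from collections import Counter
import math

# Toy stylometric fingerprint: relative frequencies of common function
# words. Real attacks use far richer features (character n-grams,
# punctuation habits, sentence-length distributions); this is a sketch.
FUNCTION_WORDS = ["the", "a", "of", "and", "to", "in", "that", "is", "it", "i"]

def fingerprint(text):
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a, b):
    # Cosine similarity between two fingerprint vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)
```

An attacker compares the fingerprint of an anonymous post against known writing samples; rewriting through an LLM shifts these frequencies toward the model's own habits, which is exactly why the output "sounds like an LLM."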
Using throwaways whenever possible mitigates a lot of the risk, too.
That’s true. The old security versus convenience tradeoff.
But if I were a government agency, I would be pressing AI providers for data, or fingerprinting the output with punctuation/whitespace or something more subtle.
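A minimal sketch of what a whitespace-based fingerprint could look like: hiding an identifier in zero-width Unicode characters appended to the text. This is a hypothetical illustration of the idea in the comment, not any provider's actual scheme (real watermarking work tends to bias token sampling instead):

```python
# Hypothetical fingerprint: encode a user ID as a bitstring of
# zero-width characters appended to the end of the output text.
# Invisible when rendered, trivially recoverable by whoever knows
# the scheme -- and trivially stripped by whoever suspects it.
ZERO = "\u200b"  # zero-width space       -> bit 0
ONE = "\u200c"   # zero-width non-joiner  -> bit 1

def embed(text, user_id, width=8):
    bits = format(user_id, f"0{width}b")
    marks = "".join(ZERO if b == "0" else ONE for b in bits)
    return text + marks

def extract(text, width=8):
    marks = [c for c in text if c in (ZERO, ONE)]
    bits = "".join("0" if c == ZERO else "1" for c in marks[:width])
    return int(bits, 2) if bits else None
```

This also shows why the open-model point below matters: a scheme like this only works if the provider controls the output path end to end.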
Though I guess with open models that people can run on-device, that’s mitigated a lot.