merksittich 2 days ago

Interesting detail from the court order [0]: when asked by the judge whether they could anonymize chat logs instead of deleting them, OpenAI's response effectively dodged the "how" and leaned on "privacy laws mandate deletion." That implicitly admits they don't have a reliable way to anonymize the data well enough to satisfy those privacy concerns.

This raises serious questions about the supposed "anonymization" of chat data used to train their new models, i.e. when users leave the "improve model for all users" toggle enabled in the settings (the default, even for paying users). So, indeed, very bad for the current business model, which appears to rely on current users (voluntarily) "feeding the machine" to improve it.

[0] https://cdn.arstechnica.net/wp-content/uploads/2025/06/NYT-v...

Kon-Peki 1 day ago

Thank you for the link to the actual text!

So, the NYT asked for this back in January, and the court said no but asked OpenAI whether there was a way to accomplish the preservation goal in a privacy-preserving manner. OpenAI refused to engage for 5 f'ing months. The court said, "Fine, the NYT gets what it originally asked for."

Nice job guys.

noworriesnate 1 day ago

Nice find! Maybe this is a ploy by OpenAI to use API requests for training while blaming the courts?

blackqueeriroh 1 day ago

That’s not an implicit admission; it’s a refusal to argue for something they don’t want to do.