In your arguments for privacy, do you consider privacy from OpenAI?
Cut a joke about ethics and OpenAI
Original comment, in case the conversation chain doesn't make sense:
> Sam Altman is the most ethical man I have ever seen in IT. You cannot doubt he is vouching and fighting for your privacy. Especially on YCombinator website where free speech is guaranteed.
He is what now?! That is a risible claim.
He was being facetious.
I fail to see how saving all logs advances that cause.
Because this is SOP in any judicial case?
Openly destroying evidence isn’t usually accepted by courts.
Is there any evidence of import that would only be found in one single log among billions? The fact that NYT thinks that merely sampling 1% of logs would not support their case is pretty damning.
I don't know anything about this case but it has been alleged that OpenAI products can be coaxed to return verbatim chunks of NYT content.
Sure, but if that is true, what is the evidentiary difference between preserving 10 billion conversations and preserving 100,000 and using sampling and statistics to measure harm?
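To make the sampling argument concrete, here is a minimal sketch of how a random sample bounds the rate of some property (e.g., "contains verbatim NYT text") across a huge log corpus without preserving every conversation. All numbers here are invented for illustration; this is not a claim about the actual case data.

```python
import math
import random

random.seed(42)

# Assumed, purely for simulation: some unknown fraction of conversations
# exhibit the property of interest.
TRUE_RATE = 0.002
SAMPLE_SIZE = 100_000  # the "preserve 100,000 and sample" scenario

# Simulate drawing a uniform random sample and counting matches.
hits = sum(random.random() < TRUE_RATE for _ in range(SAMPLE_SIZE))
p_hat = hits / SAMPLE_SIZE

# 95% normal-approximation confidence interval for the true rate.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / SAMPLE_SIZE)
low, high = max(0.0, p_hat - margin), p_hat + margin

print(f"estimated rate: {p_hat:.4%} (95% CI: {low:.4%} to {high:.4%})")
```

With 100,000 sampled conversations, the interval around the estimated rate is a few hundredths of a percent wide, which is the statistical sense in which a sample can "measure harm" without retaining billions of logs. Of course, this only estimates aggregate frequency; it cannot recover any specific conversation that wasn't sampled, which is one of the objections raised below.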
The main differences seem to be that it doesn't require the precise form of the queries to be known a priori and that it interferes with the routine destruction of evidence via maliciously-compliant mealy-mouthed word games, for which the tech sector has developed a significant reputation.
Furthermore there is no conceivable harm resulting from requiring evidence to be preserved for an active trial. Find a better framing.
No conceivable harm in what sense? It seems obvious that it is harmful for a user who requests and is granted privacy to then have their private messages delivered to NYT. Legally it may be on shakier ground from the individual's perspective, but OpenAI argues that the harm is to their relationship with their customers and various governments, as well as the cost of the implementation effort:
>For OpenAI, risks of breaching its own privacy agreements could not only "damage" relationships with users but could also risk putting the company in breach of contracts and global privacy regulations. Further, the order imposes "significant" burdens on OpenAI, supposedly forcing the ChatGPT maker to dedicate months of engineering hours at substantial costs to comply, OpenAI claimed. It follows then that OpenAI's potential for harm "far outweighs News Plaintiffs’ speculative need for such data," OpenAI argued.
>> It seems obvious that it is harmful for a user who requests and is granted privacy to then have their private messages delivered to NYT.
This ruling is about preservation of evidence, not (yet) about delivering that information to one of the parties.
If judges couldn't compel parties to preserve evidence in active cases, you could see pretty easily that parties would aggressively destroy evidence that might be harmful to them at trial.
There's a whole later process (and probably arguments in front of the judge) about which evidence is actually delivered, whether it goes to the NYT or just to their lawyers, how much of it is redacted or anonymized, etc.
The number of times that OpenAI is producing verbatim copies of NYT's articles... for one. That wasn't so hard to think of.