Feels like bad faith to say that, knowing full well that:
1. This would also be a massive legal headache.
2. It would become impossibly expensive.
3. We obviously wouldn't have the AI we have today, an incredible (if immature) technology, if this had happened. Instead, the growth of AI would have been strangled by rights holders wanting infinity money, because they know that once their data is in the model they aren't getting it back, ever: it's a one-time sale.
I'm of the opinion that AI is and will continue to be a net positive for society. So I see this as essentially saying "let's go and remove this and delay its development by 10-20 years, and ensure people can't feasibly train and run their own models for a lot longer, because only big companies can afford real training datasets."
Why not simply make your counterargument rather than accusing GP of arguing in bad faith? Your argument seems to be that it's fine to break the law if the net outcome for society is positive. It's not "bad faith" to disagree with that.
But they didn't break the law. The NYT articles were not algorithms/AI.
It's bad faith because they are saying "well, they should have done [unreasonable thing]." I explored their version of things from my own perspective (it's not possible) and from a conciliatory one (okay, suppose they somehow navigate that hurdle anyway: is society actually better off?), and I explained why I think it's infeasible.
If they didn't break the law, your pragmatic point about outcomes is irrelevant. OpenAI is in the clear regardless of whether they are doing something great or something useless. So I don't honestly know what you're trying to say. I'm not sure why getting licenses to IP you want to use is unreasonable; it happens all the time.
Edit: Authors Guild, Inc. v. Google, Inc. is a great example of a case where a tech giant tried to legally secure the rights to use a whole bunch of copyrighted content (~all books ever published), but failed. The net result was that they had to shut off access to most of the Google Books corpus, even though it would (IMO) have been a net benefit to society if they had been able to do what they wanted.
> Your argument seems to be that it's fine to break the law if the net outcome for society is positive.
In any other context, this would be known as "civil disobedience". It's generally considered something to applaud.
For what it's worth, I haven't made up my mind about the current state of AI. I haven't yet seen an ability for the systems to perform abstract reasoning, to _actually_ learn. (Show me an AI that has been fed with nothing but examples in languages A and B. Then demonstrate, conclusively, that it can apply the lessons it has learned in language M, which happens to be nothing like the first two.)
> In any other context, this would be known as "civil disobedience". It's generally considered something to applaud.
No, civil disobedience is when you break the law expecting to be punished, to force society to confront the evil of the law. The point is that you get publicly arrested, possibly get beaten, get thrown in jail. This is not at all like what OpenAI is doing.
> I'm of the opinion that AI is and will continue to be a net positive for society. So I see this as essentially saying "let's go and remove this and delay its development by 10-20 years, and ensure people can't feasibly train and run their own models for a lot longer, because only big companies can afford real training datasets."
Absolutely. Which, presumably, means that you're fine with the argument that your DNA (and that of each member of your family) could provide huge benefits to medicine and potentially save millions of lives.
But significant research will be required to make that happen. As such, we will be requiring (with no opt-outs allowed) you and your whole family to provide blood, sperm, and ova samples weekly until that research pays off. You will receive no compensation or other consideration beyond the knowledge that you're moving the technology forward.
May we assume you're fine with that?