Why do you think OpenAI is more trustworthy than Google?
For me it’s less about trustworthiness and more about what they can do with the information. Google can potentially locate, profile, and influence everyone around me, and I don’t want anyone to have that kind of power, however benevolent they may be.
What can OpenAI do? They can sell my data, whatever, it’s a whole bunch of prompts of me asking for function and API syntax.
Do you think Google doesn't sell that data, or that other companies don't collect and resell it?
In either case, I'm sure that's how it starts. "This company has very little power and influence; what damage can they do?"
Until, oh so suddenly, they're tracking and profiling you and selling that data.
I agree with GP. The reason is simple, business model.
Google’s main source of income, by far, is selling ads. Not just any ads but highly targeted ones, which means global digital surveillance is an essential part of their business model.
OpenAI doesn't have a business model. They sell dollars for 75 cents. If push comes to shove, they will sell your data to make ends meet. What about OpenAI screams stability and trust? Is it their leadership leaving after countless episodes of drama? Is it a CEO who oozes snake oil?
> OpenAI doesn't have a business model
It seems their revenue in 2024 exceeded $3B.
> they will sell your data to make ends meet
I’m not sure they can do that without breaching the contract. My employer pays for the ChatGPT Enterprise I use.
Another thing: OpenAI has a very small amount of my data, because they only have the stuff I entered into their web service. Google, on the other hand, tracks people across half the internet, because half of all web pages contain ads served by Google. Too bad antitrust regulators were asleep on the job when Google acquired DoubleClick, AdMob, and the rest of them.
> It seems their revenue in 2024 exceeded $3B.
With a loss of $5B. A viable business model needs more than revenue. It also needs profit.
It is not unusual for a business with a vision for profitability to accept losses for a while to get there, but OpenAI does not seem to have such a vision. They seem to be working off the old tech model of "if we get enough users, we'll eventually figure something out", which, every other time we've heard it, has ended up meaning selling user data.
Maybe this time will be different, but every time we hear that...
Mainly because I see moral alignment and I see Sam Altman as a person of good moral standing. I don’t see any perception of morality from Google, just a faceless mega corporation.
That is troubling. What happens if he leaves then?
Cult of personality is blinding. But I could be wrong in my interpretation. Could you describe what that moral standing consists of, in your own words, without appealing to public figures as examples?
Simply put, Google has had more time to develop a terrible data-hoarding reputation.
Isn't hoarding data for training purposes a key part of OpenAI's business model? I get that they don't have a reputation for selling that data (or access to it) yet, but, what happens if/when funding dries up?
I definitely don't trust Google -- fool me once, and all -- but to the extent I'm going to "trust" any business with my data, I'd like to see a proven business model that isn't based on monetizing my information and is likely to continue to work (e.g., Apple). OpenAI doesn't have that.
I don't think it's about trusting OpenAI necessarily, and definitely not a character like Sam Altman. It's more about Google having a proven record of being data obsessed. 99% of the money they make is from our data. Many other tech giants (Apple, Microsoft, etc) are also hard to trust, but at least they don't have their whole business model built on user data like Google and Meta. I can't blame anyone looking at OpenAI as a lesser evil.