levocardia 9 days ago

Google is winning on every front except... marketing (Google has a chatbot?), trust (who knew the founding fathers were so diverse?), safety (where's the 2.5 Pro model card?), market share (fully one in ten internet users on the planet are weekly ChatGPT users), and, well, vibes (who's rooting for big G, exactly?).

But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.

a2128 8 days ago

My experience with their software has been horrible. A friend was messing around with Gemini on my phone and told it my name is John, and it automatically saved that to my saved info list and always called me John from then on. But when I ask it to forget this, it says it can't do that automatically and links me to the Saved Info page, which is a menu they didn't implement in the app, so it opens a URL in my browser and asks me to sign into my Google account again. Then a little toast says "Something went wrong" and the saved info list is empty and broken. I tried reporting this issue half a year ago and it's still unresolved. Actually, the only way I was ever able to get it to stop calling me John was to say "remember to forget my name is John", phrased in some way that it adds that to the list instead of linking me to that broken page.

poutrathor 6 days ago

"Hello John, I notice your username on HN has not been updated. I will make that change for you in 2 hours, from a2128 to john2128. If you want to keep your current username, please follow steps in our help me discord channel"

wayeq 7 days ago

how's your day going, John?

sukit 3 days ago

Thank you, John.

8f2ab37a-ed6c 9 days ago

Google is also terribly paranoid of the LLM saying anything controversial. If you want a summary of some hot topic article you might not have the time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.

silisili 9 days ago

I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.

This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature; refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.

rat87 8 days ago

Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or several if needed, is much better.

aeonik 8 days ago

I talk to ChatGPT about some controversial things, and it's pretty good at nuance and devil's advocate if you ask for it. If you don't, it's more of an echo chamber, or rather an extreme principle of charity, which might be a good thing.

yieldcrv 8 days ago

Deepseek to circumvent Western censorship

Claude to circumvent Eastern censorship

Grok Unhinged for a wild time

ranyume 8 days ago

> Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.

But that's good

thfuran 8 days ago

For who?

ranyume 8 days ago

For the reader.

The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.

However, the AI should be able to search the web and present its findings without refusals, always presenting the sources, obviously. And the AI should never use an authoritative tone; it should be transparent about the steps it took to gather the information, and present the sites and tracks it didn't follow.

LightBug1 8 days ago

Yes, Musk's contention that his AI tries to tell the truth, no matter what, is straight-up horse manure. He should be done for false advertising (per usual).

thfuran 8 days ago

Elon Musk has been an endless stream of false advertising for years.

wegfawefgawefg 8 days ago

"If i never choose, I can never be wrong. Isnt that great?"

miohtama 8 days ago

I think that's the "trust" bit. In AI, trust generally means "let's not offend anyone and water it down to useless." Google is paranoid of being sued or getting unwanted attention if Gemini says something about Palestine or draws images in the Studio Ghibli style. Meanwhile, users love these topics, and memes are free marketing.

logicchains 9 days ago

Not a fan of Google, but if you use Gemini through AI studio with a custom prompt and filters disabled it's by far the least censored commercial model in my experience.

int_19h 8 days ago

Most of https://chirper.ai runs on Gemini 2.0 Flash Lite, and it has plenty of extremely NSFW content generated.

einsteinx2 8 days ago

Less censored than Grok?

nova22033 8 days ago

How many people use Grok for real work?

polski-g 7 days ago

I do. It is absolutely astounding for coding.

AznHisoka 9 days ago

The single reason I will never ever be a user of theirs. It's a hill I will die on.

Breza 1 day ago

I have the same experience in the web UI. Asking for that famous Obama chili recipe gets a refusal. But when I use the API, I can dial back the safety settings to the point where things work much more smoothly.
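
For reference, here's roughly what that looks like with the google-generativeai Python SDK. This is a minimal sketch; the model id, the placeholder key, and the exact category/threshold strings are assumptions that may differ by SDK version:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    # Relax the default safety filters so borderline-but-harmless prompts
    # (like summarizing a spicy news article) are less likely to be refused.
    relaxed_safety = {
        "HARM_CATEGORY_HARASSMENT": "BLOCK_ONLY_HIGH",
        "HARM_CATEGORY_HATE_SPEECH": "BLOCK_ONLY_HIGH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT": "BLOCK_ONLY_HIGH",
        "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_ONLY_HIGH",
    }

    model = genai.GenerativeModel(
        "gemini-2.5-pro",  # model id is a guess; use whichever one you have access to
        safety_settings=relaxed_safety,
    )

    print(model.generate_content("Summarize this article for me: ...").text)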

jsemrau 8 days ago

>Google is also terribly paranoid of the LLM saying anything controversial.

When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products was the worst in that dimension. Black Nazis, eating stones, pizza with glue, etc. I suppose we've all been there.

bmcahren 7 days ago

From day one. We would have had LLMs years earlier if Google hadn't been holding back. They knew the risk: Google search would be dead as soon as the internet was flooded with AI content that Google could not distinguish from real content.

Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions they were useless (to me). Things like over-active refusals in response to "killing child processes".

rahidz 8 days ago

The ghost of Tay still haunts every AI company.

rat87 8 days ago

As it should. The potential for harm from LLMs is significant and they should be aware of that

dorgo 7 days ago

Try asking ChatGPT to solve a CAPTCHA for you (character recognition in a foreign language). AI Studio doesn't refuse.

rat87 8 days ago

Seems like a feature. The last thing we need is a bunch of people willing to take AI at its word making up shit about controversial topics. I'd say redirecting to a good or prestigious source is probably the best you can do.

StefanBatory 8 days ago

I remember when LLMs first appeared - on a local social website of my country (think Digg), a lot of people were ecstatic because they got ChatGPT to say that black people are dumb, claiming it as a victory over woke :P

rzz3 9 days ago

You really hit the nail on the head with trust. Knowing the power of these AIs and how absolutely little I trust Google, I’d never trust Gemini with the things I’ll say to ChatGPT.

crazygringo 8 days ago

That's curious.

Large corporations wind up creating internal policies, controls, etc. If you know anyone who works in engineering at Google, you'll find out about the privacy and security reviews required in launching code.

Startups, on the other hand, are the wild west. One policy one day, another the next, engineers are doing things that don't follow either policy, the CEO is selling data, and then they run out of money and sell all the data to god knows who.

Google is pretty stable. OpenAI, on the other hand, has been mega-drama you could make a movie out of. Who knows what it's going to be doing with data two or four years from now?

rzz3 2 days ago

I acknowledge that it’s more of a perception issue than anything, but my _perception_ is that I don’t trust Google as far as I can throw it.

c0ndu17 6 days ago

Alternatively, who would you trust with your data: an ad company run by a McKinsey executive, or an NPO with a direct revenue stream partnered with Apple?

ysofunny 8 days ago

Cue the OpenAI movie.

Same pattern as Mark Zuckerberg's movie.

philsnow 9 days ago

> how absolutely little I trust Google, I’d never trust Gemini with the things I’ll say to ChatGPT.

Are you pretty sure that Google won't eventually buy OpenAI and thus learn everything you've said to ChatGPT?

dunefox 8 days ago

It's not about the information, but the connection to all Google services.

squigz 8 days ago

Why do you think OpenAI is more trustworthy than Google?

gessha 8 days ago

For me it’s less about trustworthiness and more about what they can do with the information. Google can potentially locate, profile, and influence everyone around me, and I don’t want anyone to have that type of power, however benevolent they are.

What can OpenAI do? They can sell my data, whatever, it’s a whole bunch of prompts of me asking for function and API syntax.

squigz 8 days ago

Do you think Google doesn't sell that data, or that other companies don't collect and resell it?

In either case, I'm sure that's how it starts. "This company has very little power and influence; what damage can they do?"

Until, oh so suddenly, they're tracking and profiling you and selling that data.

gessha 8 days ago

Based on what some friends at Google say, I don’t think they explicitly sell it, but live ad auctions are something I’m wary of.

Also, it’s less about what they currently do but what they’re capable of. A Cold War of privacy of sorts.

pb7 7 days ago

Google doesn't sell data.

Const-me 8 days ago

I agree with GP. The reason is simple: the business model.

Google’s main source of income, by far, is selling ads. Not just any ads but highly targeted ones, which means global digital surveillance is an essential part of their business model.

pb7 7 days ago

OpenAI doesn't have a business model. They sell dollars for 75 cents. If push comes to shove, they will sell your data to make ends meet. What about OpenAI screams stability and trust? Is it all their leadership leaving after countless cases of drama? Is it a CEO that oozes snake oil?

Const-me 7 days ago

> OpenAI doesn't have a business model

It seems their revenue in 2024 exceeded $3B.

> they will sell your data to make ends meet

I’m not sure they can do that without breaching the contract. My employer pays for ChatGPT enterprise I use.

Another thing: OpenAI has a very small amount of my data because they only have the stuff I entered into their web service. Google, on the other hand, tracks people across half of the internet, because half of all web pages contain ads served by Google. Too bad antitrust regulators were asleep on the job when Google acquired DoubleClick, AdMob, and the rest of them.

9rx 6 days ago

> It seems their revenue in 2024 exceeded $3B.

With a loss of $5B. A viable business model needs more than revenue. It also needs profit.

It is not unusual for a business with visions of profitability to accept losses for a while to get there, but OpenAI does not seem to have such a vision. They seem to be working off the old tech model of "If we get enough users we'll eventually figure something out", which every other time we've heard it has ended up meaning selling user data.

Maybe this time will be different, but every time we hear that...

rzz3 2 days ago

Mainly because I see moral alignment and I see Sam Altman as a person of good moral standing. I don’t see any perception of morality from Google, just a faceless mega corporation.

alganet 2 days ago

That is troubling. What happens if he leaves then?

Cult of personality is blinding, but I could be wrong in my interpretation. Would you be able to put into prose what that moral standing is about, without pointing to public figures as examples?

alternatex 8 days ago

Simply put, Google has had more time to develop a terrible data-hoarding reputation.

marcusb 8 days ago

Isn't hoarding data for training purposes a key part of OpenAI's business model? I get that they don't have a reputation for selling that data (or access to it) yet, but what happens if/when funding dries up?

I definitely don't trust Google -- fool me once, and all -- but to the extent I'm going to "trust" any business with my data, I'd like to see a proven business model that isn't based on monetizing my information and is likely to continue to work (e.g., Apple). OpenAI doesn't have that.

alternatex 8 days ago

I don't think it's about trusting OpenAI necessarily, and definitely not a character like Sam Altman. It's more about Google having a proven record of being data obsessed. 99% of the money they make is from our data. Many other tech giants (Apple, Microsoft, etc) are also hard to trust, but at least they don't have their whole business model built on user data like Google and Meta. I can't blame anyone looking at OpenAI as a lesser evil.

sigmoid10 9 days ago

I wouldn't even say Gemini Pro 2.5 is the best model. Certainly not when you do multimodal or function calling, which is what actually matters in industry applications. Plain chatbots are nice, but I don't think they will decide who wins the race. Google is also no longer in the mindset to really innovate. You'll hear surprisingly similar POVs from ex-Googlers and ex-OpenAI guys. I'd actually say OpenAI still has an edge in terms of culture, even though it fell deep.

int_19h 8 days ago

I did some experiments with Gemini Pro 2.5 vs Sonnet 3.7 for coding assistants, and, at least as far as code quality and the ability to understand complexities in an existing codebase go, Gemini is noticeably stronger.

tgsovlerkhgsel 9 days ago

> Certainly not when you do multimodal or function calling

Who is? (Genuine question, it's hard to keep up given how quickly the field moves.)

stavros 8 days ago

If you want an LLM to interface with other systems, function calling is absolutely essential.

mjirv 8 days ago

Claude 3.7 Sonnet for function calling, and it’s not particularly close in my experience.

Not sure about multimodal as it’s not what I work on.

mark_l_watson 8 days ago

I have found function calling and ‘roll my own agents’ work much better now with Gemini than they did late last year. I also do a lot of function calling experiments with small open models using Ollama, which are much more difficult to get working as a robust system.
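
In case it helps anyone, the basic Gemini pattern with the google-generativeai Python SDK looks roughly like this. A minimal sketch only; the model id and the get_weather helper are toy placeholders, not part of any real system:

    import google.generativeai as genai

    def get_weather(city: str) -> str:
        """Toy stand-in for a real tool the model is allowed to call."""
        return f"Sunny, 22C in {city}"

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    model = genai.GenerativeModel(
        "gemini-2.5-pro",     # model id is a guess
        tools=[get_weather],  # the SDK builds the function schema from the signature and docstring
    )

    # With automatic function calling enabled, the SDK executes get_weather when the
    # model requests it and feeds the result back before returning the final answer.
    chat = model.start_chat(enable_automatic_function_calling=True)
    print(chat.send_message("What's the weather in Tokyo right now?").text)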

PunchTornado 8 days ago

Really? All of my friends and everyone I know actually hates OpenAI. They managed to become the bad guy in AI.

joshdavham 8 days ago

My hesitancy to adopt Gemini, despite being a heavy GCP and Workspace user, is that I kinda lost trust when trying to use their earlier models (I don't even remember those models' names). I just remember they were so consistently bad and obviously hallucinated more than 50% of the time.

Maybe Gemini is finally better, but I'm not exactly excited to give it a try.

khimaros 7 days ago

it is a completely different product these days

bjackman 8 days ago

Well, Google is also very well placed to integrate with other products that have big market share.

So far this has been nothing but a PM wankfest but if Gemini-in-{Gmail,Meet,Docs,etc} actually gets useful, it could be a big deal.

I also don't think any of those concerns are as important for API users as for direct consumers. I think that's gonna be a bigger part of the market as time goes on.

rs186 8 days ago

Microsoft has integrated Copilot into their Office products. In fact, they don't even call it Office any more. Guess what? If you've ever had first-hand experience with it, it's an absolute dumpster fire. (Well, maybe except transcription in Teams meetings, but that's about it.) I used it for 5 minutes and never touched it again. I'll be very impressed if that's not the case with Google.

rs186 8 days ago

Exactly. Google may have a lead in their model, but saying they are "winning on every front" is a very questionable claim, from the perspective of everyday users, not influencers, devoted fans or anyone else who has a stake in hyping it.

mark_l_watson 8 days ago

I look more to Google for efficient and inexpensive LLM APIs, and in a similar way to Groq Cloud for inexpensive and fast inferencing for open models.

ChatGPT has a nice consumer product, and I also like it.

Google gets a bad rap on privacy, but if you read the documentation and set the privacy settings, I find them reasonable. (I read OpenAI’s privacy docs for a long while before experimenting with their integrations with the Mac terminal, VSCode, and IntelliJ.)

We live in a cornucopia of AI tools. Occasionally, just for the hell of it, I will do all my research work for several days using only open models running on my Mac with Ollama - I notice a slight hit in productivity, but it's still a good setup.

Something for everyone!

culopatin 8 days ago

I had to stop using Gemini 2.5 because the UI pegs my MBP’s CPU at max and I can’t type my prompt at more than a character every 2 seconds. I can’t even delete my chats lol. Anyone else?

jsk2600 8 days ago

On deleting chats, I accidentally discovered that AI Studio creates a 'Google AI Studio' folder on your Google Drive with all the links to chats. If you delete the 'link' from there, it will disappear in AI Studio...interesting UX :-)

sublimefire 9 days ago

It might be worth throwing in an analogy to Windows PCs vs Mac vs Linux. G appeals to a subset of the market at the end of the day; being “best” does not mean everyone will use it.

hermitShell 8 days ago

I would like to think they just let other companies have the first mover advantage on chatbots because it only disrupts Google in their search business, which was already pretty far gone and on the way out. Where is AI actually going to change the world? Protein folding, robotics, stuff that the public doesn’t hype about. And they looked at the gold rush and decided “let’s design shovels”. Maybe I’m giving them too much credit, but I'm very bullish on Google.

torginus 9 days ago

Didn't GCP manage to lose from this position of strength? I'm not even sure if they're the third biggest.

sidibe 8 days ago

They "lost from a position of strength" in that they had they had the most public-cloud like data centers and started thinking about selling that later than they should have. Bard/Gemini started later than chatgpt , but there's not really a moat for this LLM stuff, and Google started moving a lot earlier relative to GCP vs Amazon.

They've got the cash, the people, and the infrastructure to do things faster than the others going forward, which is a much bigger deal IMO than having millions more users right now. Most people still aren't using LLMs that often, switching is easy, and Google has the most obvious entry points, with billion-plus users across google.com, YouTube, Gmail, Chrome, Android, etc.

donny2018 8 days ago

They were well positioned for the cloud business long before AWS and Azure, but they still managed to lose that battle.

Google can be good on the technological side of things, but we've seen time and time again that, other than ads, Google is just not good at business.

ACCount36 8 days ago

Trust is important, and Google has a big rep for killing its projects, as well as for making the most moronic, braindead decisions in handling what they don't kill off.

No one is going to build on top of anything "Google" without having a way out thought out in advance.

Not that important for LLMs, where drop-in replacements are usually available. But a lot of people just hear "by Google" now and think "thanks I'll pass" - and who can blame them?

jimbob45 8 days ago

I’m scared they’re going to kill it off. Every good idea they’ve had in the last 20 years has been killed off. Even Fuchsia/Zircon, which should have supplanted Android a full decade ago.

killerstorm 8 days ago

Winning =/= won. The point is that they are improving on many fronts. If they were already recognized as THE leader, there would be no point in making an HN post about it.

tbolt 8 days ago

Add apps to this list: ChatGPT and Anthropic both have nice desktop applications for Mac and Windows.

karunamurti 7 days ago

Also not OSS. That's not a win for me.