8f2ab37a-ed6c 9 days ago

Google is also terribly paranoid about the LLM saying anything controversial. If you want a summary of some hot-topic article you might not have time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.

silisili 9 days ago

I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.

This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.

rat87 8 days ago

Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or multiple if needed, is much better.

aeonik 8 days ago

I talk to ChatGPT about some controversial things, and it's pretty good at nuance and devil's advocate if you ask for it. If you don't ask, it's more of an echo chamber, or rather an extreme principle of charity, which might be a good thing.

yieldcrv 8 days ago

Deepseek to circumvent Western censorship

Claude to circumvent Eastern censorship

Grok Unhinged for a wild time

ranyume 8 days ago

> Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.

But that's good

thfuran 8 days ago

For who?

ranyume 8 days ago

For the reader.

The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.

However, the AI should be able to search the web and present its findings without refusals, obviously always presenting the sources. And the AI should never use an authoritative tone; it should be transparent about the steps it took to gather the information, and present the sites and tracks it didn't follow.

LightBug1 8 days ago

Yes, Musk's contention of an AI trying to tell the truth, no matter what, is straight up horse manure. He should be done for false advertising (per usual).

thfuran 8 days ago

Elon Musk has been an endless stream of false advertising for years.

wegfawefgawefg 8 days ago

"If I never choose, I can never be wrong. Isn't that great?"

miohtama 8 days ago

I think that's the "trust" bit. In AI, trust generally means "let's not offend anyone and water it down to uselessness." Google is paranoid about being sued/getting attention if Gemini says something about Palestine or draws images like Studio Ghibli. Meanwhile, users love these topics, and memes are free marketing.

logicchains 9 days ago

Not a fan of Google, but if you use Gemini through AI Studio with a custom prompt and filters disabled, it's by far the least censored commercial model in my experience.

int_19h 8 days ago

Most of https://chirper.ai runs on Gemini 2.0 Flash Lite, and it has plenty of extremely NSFW content generated.

einsteinx2 8 days ago

Less censored than Grok?

nova22033 8 days ago

How many people use Grok for real work?

polski-g 7 days ago

I do. It is absolutely astounding for coding.

AznHisoka 9 days ago

The single reason I will never ever be a user of them. It's a hill I will die on.

Breza 1 day ago

I have the same experience in the web UI. Asking for that famous Obama chili recipe gets a refusal. But when I use the API, I can dial back the safety settings to the point where things work much more smoothly.
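The "dial back the safety settings" step the comment describes can be sketched as a request payload. This is a minimal, stdlib-only sketch assuming the Gemini REST API's `safetySettings` field; `build_payload` is a hypothetical helper, and the category/threshold strings come from Google's public safety-settings documentation. It only constructs the payload and does not call the endpoint.

```python
import json

def build_payload(prompt, threshold="BLOCK_ONLY_HIGH"):
    """Build a generateContent-style payload with relaxed safety thresholds.

    Hypothetical helper for illustration: one safety setting per harm
    category, all set to the same (looser) blocking threshold.
    """
    categories = [
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    ]
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "safetySettings": [
            {"category": c, "threshold": threshold} for c in categories
        ],
    }

# Print the JSON body you would POST to the generateContent endpoint.
payload = build_payload("Summarize this article for me: ...")
print(json.dumps(payload, indent=2))
```

The web UI exposes no such knobs, which is presumably why the same question succeeds over the API but gets refused in the chat interface.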

jsemrau 9 days ago

>Google is also terribly paranoid of the LLM saying anything controversial.

When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products was the worst in that dimension. Black Nazis, eating stones, glue on pizza, etc. I suppose we've all been there.

bmcahren 7 days ago

From day one. We would have had LLMs years earlier if Google hadn't been holding back. They knew the risk: Google Search would be dead as soon as the internet was flooded with AI content that Google couldn't distinguish from real content.

Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions they were useless (to me). Things like over-active refusals in response to "killing child processes".

rahidz 8 days ago

The ghost of Tay still haunts every AI company.

rat87 8 days ago

As it should. The potential for harm from LLMs is significant and they should be aware of that

dorgo 7 days ago

Try asking ChatGPT to solve a captcha for you (character recognition in a foreign language). AI Studio doesn't refuse.

rat87 8 days ago

Seems like a feature. The last thing we need is a bunch of people willing to take AI at its word making up shit about controversial topics. I'd say redirecting to a good or prestigious source is probably the best you can do.

StefanBatory 8 days ago

I remember when LLMs first appeared: on a local social website in my country (think Digg), a lot of people were ecstatic because they got ChatGPT to say that black people are dumb, claiming it as a victory over woke :P