Google is also terribly paranoid about the LLM saying anything controversial. If you want a summary of some hot-topic article you might not have time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.
I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.
This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that; it leads them to potentially less trustworthy sources.
Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or several if needed, is much better.
I talk to ChatGPT about some controversial things, and it's pretty good at nuance and devil's advocate if you ask for it. If you don't ask, it's more of an echo chamber, or rather an extreme principle of charity, which might be a good thing.
Deepseek to circumvent Western censorship
Claude to circumvent Eastern censorship
Grok Unhinged for a wild time
> Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.
But that's good
For who?
For the reader.
The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.
However, the AI should be able to search the web and present its findings without refusals, obviously always citing the sources. And the AI should never use an authoritative tone; it should be transparent about the steps it took to gather the information, and present the sites and leads it didn't follow.
I think that's the "trust" bit. In AI, trust generally means "let's not offend anyone and water it down to useless." Google is paranoid about being sued or getting bad attention if Gemini says something about Palestine or draws images in the Studio Ghibli style. Meanwhile users love these topics, and memes are free marketing.
Not a fan of Google, but if you use Gemini through AI Studio with a custom prompt and the filters disabled, it's by far the least censored commercial model in my experience.
Most of https://chirper.ai runs on Gemini 2.0 Flash Lite, and it has plenty of extremely NSFW content generated.
Less censored than Grok?
The single reason I will never, ever be a user of them. It's a hill I will die on.
I have the same experience in the web UI. Asking for that famous Obama chili recipe gets a refusal. But when I use the API, I can dial back the safety settings to the point where things work much more smoothly.
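The safety settings mentioned here are exposed as a `safetySettings` field in the Gemini REST API's `generateContent` request body. A minimal sketch of what dialing them back looks like, assuming the public category and threshold names; the model name and prompt are placeholders, and you would POST this body to the `generativelanguage.googleapis.com` endpoint with your own API key:

```python
import json

# The four adjustable harm categories from the public Gemini REST API.
HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def build_request(prompt: str, threshold: str = "BLOCK_NONE") -> dict:
    """Build a generateContent payload with every safety category
    dialed down to the given threshold (default: no blocking).
    The prompt here is just a placeholder."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "safetySettings": [
            {"category": c, "threshold": threshold} for c in HARM_CATEGORIES
        ],
    }

body = build_request("Summarize this article for me.")
print(json.dumps(body["safetySettings"], indent=2))
```

In the web UI none of this is exposed, which would explain the difference in behavior: the consumer product ships with the strictest thresholds baked in, while the API and AI Studio let you choose them per request.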
>Google is also terribly paranoid of the LLM saying anything controversial.
When did this start? Serious question. Of all the model providers, my experience with Google's LLMs and chat products was the worst in that dimension. Black Nazis, eating stones, glue on pizza, etc. I suppose we've all been there.
From day one. We would have had LLMs years earlier if Google hadn't been holding back. They knew the risk: Google Search would be dead as soon as the internet was flooded with AI content that Google could not distinguish from real content.
Then you could look at how the first "public preview" models they released were so neutered by their own inhibitions that they were useless (to me). Things like overactive refusals in response to "killing child processes".
Try asking ChatGPT to solve a captcha for you (character recognition in a foreign language). AI Studio doesn't refuse.
Seems like a feature. The last thing we need is a bunch of people willing to take AI at its word making up shit about controversial topics. I'd say redirecting to a good or prestigious source is probably the best you can do.
I remember when LLMs first appeared: on a local social website in my country (think Digg), a lot of people were ecstatic because they got ChatGPT to say that black people are dumb, claiming it as a victory over woke :P