I noticed the same in Gemini. It would refuse to answer mundane questions that none but the most 'enlightened' could find an offensive twist to.
This makes it rather unusable as a catch-all go-to resource, sadly. People are curious by nature. Refusing to answer their questions doesn't squash that; it leads them to potentially less trustworthy sources.
Trying to answer complex questions by making up shit in a confident voice is the worst option. Redirecting to a more trustworthy human source, or multiple if needed, is much better.
I talk to ChatGPT about some controversial things, and it's pretty good at nuance and devil's advocate if you ask for it. If you don't, it's more of an echo chamber, or rather an extreme principle of charity, which might be a good thing.
Deepseek to circumvent Western censorship
Claude to circumvent Eastern censorship
Grok Unhinged for a wild time
> Refusing to answer their questions doesn't squash that, it leads them to potentially less trustworthy sources.
But that's good
For who?
For the reader.
The AI won't tell the reader what to think in an authoritative voice. This is better than the AI trying to decide what is true and what isn't.
However, the AI should be able to search the web and present its findings without refusals, always citing its sources. And the AI should never use an authoritative tone; it should be transparent about the steps it took to gather the information, and present the sites and leads it didn't follow.