> Yes, most people who have an incentive to push AI say that hallucinations aren't a problem, since humans aren't correct all the time either.
We have legal and social mechanisms in place for the ways humans are incorrect. LLMs are incorrect in new ways that our legal and social systems are less prepared to handle.
If a support human lies about a change to policy, the human is fired and management communicates about the rogue actor, the unchanged policy, and how the issue has been handled.
How do you address an AI doing the same thing without removing the AI from your support system?
Still fine the company that uses the AI? Hallucinations can't be prevented with the current state of LLMs, so you'll need a disclaimer and, if a user cancels, a way to surface that user's latest support chats in the CRM so you can add a human to the mix.
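To make that concrete, here's a minimal sketch in Python of what that cancellation hook could look like. Everything in it is hypothetical (`ChatMessage`, `CrmRecord`, `on_user_cancelled`, the in-memory stores), not any real CRM's API; the point is only the shape: a cancellation event comes in, the user's recent AI transcripts get attached to their CRM record, and the account is flagged for human review.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical in-memory stand-ins for a real chat-log store and CRM.
@dataclass
class ChatMessage:
    user_id: str
    role: str          # "user" or "ai"
    text: str
    timestamp: datetime

@dataclass
class CrmRecord:
    user_id: str
    recent_ai_chats: list = field(default_factory=list)
    needs_human_review: bool = False

chat_log: list[ChatMessage] = []
crm: dict[str, CrmRecord] = {}

def on_user_cancelled(user_id: str, max_chats: int = 20) -> None:
    """Cancellation hook: attach the user's latest AI support chats
    to their CRM record and flag the account so a human reviews what
    the AI told them before treating the cancellation as final."""
    recent = [m for m in chat_log if m.user_id == user_id][-max_chats:]
    record = crm.setdefault(user_id, CrmRecord(user_id=user_id))
    record.recent_ai_chats = recent
    record.needs_human_review = True

# Example: the AI hallucinates a policy, then the user cancels.
chat_log.append(ChatMessage("u42", "ai",
                            "Sure, that discount applies forever.",
                            datetime.now()))
on_user_cancelled("u42")
assert crm["u42"].needs_human_review
```

The design choice that matters is the trigger: you don't try to catch the hallucination in real time, you catch the moment it has a business consequence (the cancellation) and put a human behind it.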