I mean, to be fair, I like that they're putting their money where their mouth is, so to speak - if you want to sell a product based on the idea that AI can handle complex tasks, you should probably have AI doing what should be simple, frontline support.
> you should probably have AI doing what should be simple, frontline support.
AI companies are going to prove (to the market, or to the actual people using their products) that a bunch of "simple" problems aren't simple at all and have been undervalued for a long time.
Such as support.
I don't agree with that at all. Hallucination is a very well-known issue. Sure, leverage AI to improve their productivity, but not even having a human look over the responses shows they don't care about their customers.
If you had a human support person feeding the support question into the AI to get a hint, do you think that support person is going to know that the AI response is made up and not actually a correct answer? If they knew the correct answer, they wouldn't have needed to ask the AI.
Exactly, that's why my startup recommends that all LLM outputs come with trustworthiness scores:
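A rough sketch of one cheap way to approximate such a score, assuming a placeholder ask_llm call (a stand-in, not any real vendor's API): plain self-consistency sampling, i.e. ask the same question several times and measure how often the top answer repeats. Low agreement is a cue to route the ticket to a human.

    # Minimal self-consistency style trustworthiness score.
    # ask_llm is a placeholder for whatever client you actually use.
    from collections import Counter

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your own model call here")

    def trustworthiness_score(prompt: str, samples: int = 5) -> tuple[str, float]:
        # Sample the model several times and report how often the most
        # common answer appears. High agreement is only a weak proxy for
        # reliability; low agreement is a red flag worth a human look.
        answers = [ask_llm(prompt).strip().lower() for _ in range(samples)]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / samples

    # answer, score = trustworthiness_score("Does plan X include feature Y?")
    # if score < 0.8: escalate_to_human(answer)  # hypothetical escalation hook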
The number of times real, human-powered support caused me massive headaches and sometimes financial damage, and the number of times my lawyer had to fix those because my own attempts to explain why they were wrong went nowhere… I am not surprised that AI will do the same; the creation is the image of the creator, and all that.
> if you want to sell a product based on the idea that AI can handle complex tasks, you should probably have AI doing what should be simple, frontline support.
That would only be true if you were correct that your AI can handle complex tasks. If you want to sell dowsing rods, you probably don't want to structure your own company to rely on the rods.