> since it didn’t actually go through real reasoning, just generated text that reads like reasoning steps.
Can you elaborate on the difference? Are you bringing sentience into it? It kind of sounds like it from "don't act on their own". But reasoning and sentience are wildly different things.
> It’s dangerous to put so much faith in what in reality is a very good guessing machine
Yes, exactly. That's why I think it is good we are supplementing fallible humans with fallible LLMs; we already have processes in place that assume no actor is infallible.
So true. People who argue that we should not trust/use LLMs because they sometimes get it wrong are holding them to a higher standard than people -- we make mistakes too!
Do we blindly trust or believe every single thing we hear from another person? Of course not. But hearing what they have to say can still be fruitful, and it is not like we have an oracle at our disposal who always speaks the absolute truth, either. We make do with what we have, and LLMs are another tool we can use.
> Can you elaborate on the difference?
They’ll fail in different ways than something that actually thinks (and doesn’t have some major disease of the brain going on), and often smack in the middle of appearing to think.