Humans anthropomorphize all sorts of things, but there are way bigger consequences for treating current AI like a human than for someone anthropomorphizing their dog.
I know plenty of people who believe LLMs think and reason the same way humans do, and it leads them to make bad choices. I'm really careful about the language I use around such people, because we understand expressions like "the AI thought this" very differently.
>Humans anthropomorphize all sorts of things, but there are way bigger consequences for treating current AI like a human than for someone anthropomorphizing their dog.
AI is less human-like than a dog, in the sense that an AI (hopefully!) is not capable of experiencing suffering.
AI is also more human-like than a dog, in the sense that, unlike a dog, an AI can exert political power.
I agree that there are considerable consequences for misconstruing the nature of things, especially when there's power involved.
>I know plenty of people who believe LLMs think and reason the same way humans do, and it leads them to make bad choices.
They're not completely wrong in their belief. It's just that you, thanks to your specialized training, can automatically make a particular distinction for which most people simply have no basis for comparison. I agree that it's a very important distinction; I'd also guess that even when you do your best to explain it to people, they often prove unable to grasp its nature or its importance. Right?
See, everyone's trying to make sense of what's going on in their lives on the basis of whatever knowledge and conditioning they might have. Everyone gets it right some of the time and wrong most of the time. For example, humans also make bad choices as a result of misinterpreting other humans, or by correctly interpreting and trusting other humans who happen to be wrong. There's nothing new about that. Nor is there a particular difference between suffering the consequences of an AI-driven bad choice and those of a human-driven one: in both cases, you're a human experiencing negative consequences.
AI stupidity is simply human stupidity, distilled. If humans only ever spoke logically correct statements in an unambiguous language, that's what an LLM's training data would contain, and in turn the acceptance criterion (the "Turing test") for LLMs would be producing other unambiguously correct statements.
However, it's 2025 and most humans don't actually reason; they vibe with the pulsations of the information medium. Give us something that looks remotely plausible and authoritative, and we'll readily consider it more valid than our own immediate thoughts and perceptions, or those of another human being.
That's what media did to us, not AI. It's been working its magic for at least a century, because humans aren't anywhere near rational creatures; we're sloppy. We don't have to be; we are able to teach ourselves a tiny bit of pure thought. Thankfully, we have a tool for when we want to constrain ourselves to only thinking in logically correct statements, and only expressing those things which unambiguously make sense: it's called programming.
Up to this point, learning how to reason has been economically necessary in order to command computers. As LLMs get better, I fear thinking might be relegated to an entirely academic pursuit.