Sharlin 6 days ago

Trying to appropriate perfectly generalizable terms to mean "something that only humans do" brings zero value to a conversation. It's essentially a "god of the gaps" argument, and we don't exactly have a great track record of correctly identifying things that are uniquely human.

fao_ 6 days ago

There is currently, quite literally, a whole wealth of papers proving that LLMs do not understand, cannot reason, and cannot perform basic kinds of reasoning that even a dog can. But, ok.

wizzwizz4 6 days ago

There's a whole wealth of papers proving that LLMs do not understand the concepts they write about. That doesn't mean they don't understand grammar – which (as I've claimed since the GPT-2 days) we should, theoretically, expect them to "understand". And what is chess, but a particularly sophisticated grammar?
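
To make the "chess as a grammar" framing concrete: a minimal sketch, using the python-chess library, that treats the set of legal move sequences as a formal language whose "grammaticality" test is legality under the rules. The is_grammatical helper is purely illustrative, not anything from the papers under discussion.

    # Sketch: a chess "sentence" is a sequence of SAN moves; it is
    # "grammatical" iff every move parses and is legal in its position.
    # Requires the python-chess package (pip install chess).
    import chess

    def is_grammatical(moves: list[str]) -> bool:
        board = chess.Board()
        for san in moves:
            try:
                board.push_san(san)  # SAN parsing + legality check in one step
            except ValueError:       # raised for malformed or illegal moves
                return False
        return True

    print(is_grammatical(["e4", "e5", "Nf3"]))  # True: legal game prefix
    print(is_grammatical(["e4", "e4"]))         # False: Black has no legal "e4"

Under this reading, a model that reliably produces only legal continuations has "learned the grammar" of chess, whether or not it understands anything about strategy.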

TeMPOraL 6 days ago

There is currently, quite literally, a whole wealth of papers proving the opposite, too, so ¯\_(ツ)_/¯.