swframe2 10 days ago

What do you think of the recent Anthropic research on how LLMs reason? It makes clear that the analysis LLMs do is shallow and that they have serious reasoning weaknesses. I wonder whether those weaknesses could be "addressed" by building LLMs that can do deeper analysis and use RL to self-improve. LLMs improving LLMs would be a very impressive step toward AGI.

cperkins 9 days ago

I think we are careless in how we use terms. We often say "intelligence" when we mean "sentience". We have studied intelligence for a long time, and we have IQ tests that can measure it. The various LLMs (like ChatGPT and Gemini) score pretty well on IQ tests. So given that, I think we can conclude that they are intelligent, as far as we can measure it.

But while we have measurements for "intelligence", we don't for "sentience", "agency", "consciousness", or these other things. And I'd argue that there is plenty of intelligent life on earth (take crows as an example) that is sentient to a degree the LLMs are not. My guess is that this is because of their "agency" - their drive for survival. The LLMs we have now are clearly smarter than crows and cats, but not sentient in the way those animals are. So I think it's safe to say that "sentience" (whatever that is) is not an emergent property of neural net/training data size. If it were, it'd be evident already.

So Gemini/Chat GPT seem to be "intelligence", but in tool form. Very unexpected. Something I would not have believed possible 5 or 10 years ago, but there it is.

As to whether we could create a "sentient" AI, an AGI, I don't see any reason we shouldn't be able to. But it's clear to me that something else is needed besides intelligence. Maybe it's agency, maybe it's something else (the experience of time's passage?). We probably need ways of measuring and evaluating these other things before we can progress further.