>This is the dimensionality mentioned in the adjacent post
LLMs are only a few years old, but symbolic AI was abandoned for NLP, computer vision, etc. long before that. Why? Because the alternative was just that bad and, more importantly, never seemed to really budge with effort. Companies didn't wake up one morning and pour hundreds of millions into LMs. In fact, NNs were the underdog for a very long time. They poured more and more money in because it got better and better with investment.
There is zero reason to think even more dimensionality would do anything but waste even more time. At least the NN scalers can look back and see that it worked in the past. You don't even have that.
>an LLM trained on descriptions of dogs is going to hallucinate when an otherwise sensible query about dogs doesn't match its training. As others have said more elegantly than I will, this points to a pretty different cognitive model than humans have; human beings can (and do) give up on a task.
It doesn't take much to train LLMs to 'give up'. OpenAI talks about this from time to time. It's just not very useful, with a tendency to overcorrect. And humans hallucinate (and otherwise have weird failure modes) all the time. We just call them funny names like Dunning-Kruger and optical illusions. Certainly less often than current SOTA LLMs, but it happens all the same.
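To be concrete about the trade-off, here's a toy selective-answering sketch; the threshold and scores are made up, and this isn't any particular lab's method:

```python
# Toy sketch of selective answering: abstain when confidence is low.
# The threshold is hypothetical; push it too high and the model "gives up"
# on questions it could have answered, i.e. it overcorrects.

def answer_or_abstain(scores, threshold):
    """Return the best-scoring answer, or abstain if its probability is below threshold."""
    best, prob = max(scores.items(), key=lambda kv: kv[1])
    return best if prob >= threshold else "I don't know"

scores = {"Paris": 0.62, "Lyon": 0.23, "Marseille": 0.15}
print(answer_or_abstain(scores, threshold=0.5))   # "Paris"
print(answer_or_abstain(scores, threshold=0.9))   # "I don't know" -- overcorrection
```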
>I feel like I've had to say this a few times in threads now: none of this is to imply that Cyc was a success or would have worked.
The point is not about Cyc. Cyc is hardly the only attempt at non-monotonic logic. The point is that these systems should work much better than they do if there were anything to the approach. Again, forget recent LLMs. Even when we were doing 'humble' things like spelling error detection and correction, text compressors, voice transcription boosters, embeddings for information retrieval, recommenders, knowledge graph creation (ironically enough), machine translation services, etc., these systems were not even in the conversation. They performed that poorly.
I think we're talking past each other. I'm not interested in defending symbolic AI at all; it's clear it's failed. All told, I would not say I'm particularly interested in any kind of AI.
I'm interested in theory of mind, and I think defeasibility over a huge number of dimensions is a stronger explanation of how humans behave and think than something resembling an LLM. That doesn't somehow mean LLMs haven't "won" (they have); I just don't think they're winning at human-like cognition. Nor does it mean we could build a better alternative.
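For anyone unfamiliar with the term, a toy sketch of what defeasible (non-monotonic) inference means, with made-up predicates and no claim about how any real system implements it:

```python
# Toy sketch of defeasible (non-monotonic) inference: a default conclusion
# ("birds fly") is withdrawn when new information arrives ("Tweety is a penguin").
# Classical logic is monotonic -- adding facts never removes conclusions --
# which is exactly the property defeasible reasoning gives up.

def flies(facts, animal):
    """Conclude 'flies' by default for birds, unless a known exception defeats it."""
    if ("bird", animal) not in facts:
        return False
    exceptions = {"penguin", "ostrich", "injured"}  # defeaters for the default rule
    return not any((exc, animal) in facts for exc in exceptions)

facts = {("bird", "tweety")}
print(flies(facts, "tweety"))      # True: the default holds

facts.add(("penguin", "tweety"))   # new information arrives...
print(flies(facts, "tweety"))      # False: the earlier conclusion is retracted
```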