og_kalu 8 days ago

>which is exactly what you'd expect for a sound set of defeasible relations.

This is a leap. While a complex system of rules might coincidentally produce behavior that looks statistically optimal in some scenarios, the paper (Ernst & Banks) argues that the mechanism itself operates according to statistical principles (MLE), not just that the outcome happens to look that way.
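(For concreteness, the MLE model in Ernst & Banks is the standard reliability-weighted combination of a visual and a haptic estimate; the notation below is mine, but the math is theirs:)

    \hat{S} = w_V \hat{S}_V + w_H \hat{S}_H, \quad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \quad w_H = 1 - w_V

    \sigma_{VH}^2 = \frac{\sigma_V^2 \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min(\sigma_V^2, \sigma_H^2)

The testable part is the mechanism: the weights shift as you degrade one cue's reliability, and the combined estimate ends up less variable than either cue alone, which is what they measured.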

Moreover, it's highly unlikely, bordering on impossible, that the situations the brain deals with on even a daily basis could be reduced to a set of defeasible statements.

Example: Recognizing a "Dog"

Defeasible Attempt:

    is_dog(X) :- has_four_legs(X), has_tail(X), barks(X), not is_cat(X), not is_fox(X), not is_robot_dog(X).

    is_dog(X) :- has_four_legs(X), wags_tail(X), is_friendly_to_humans(X), not is_wolf(X).

How do you define barks(X)? (What about whimpers or growls? What about a dog that doesn't bark?) How do you handle breeds that look very different (Chihuahua vs. Great Dane)? How do you handle seeing only part of the animal? How do you represent the overall visual gestalt? The number of rules and exceptions quickly becomes vast and brittle.
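(To make the brittleness concrete, here is the kind of patching you end up doing. The extra predicates below, is_basenji, lost_a_leg, classified_as_bark and so on, are purely illustrative inventions of mine, written in the same negation-as-failure style as above:)

    % a basenji doesn't bark, so the first rule needs a variant
    is_dog(X) :- has_four_legs(X), has_tail(X), is_basenji(X), not is_cat(X), not is_fox(X), not is_robot_dog(X).

    % a three-legged dog breaks has_four_legs, so patch again
    is_dog(X) :- has_three_legs(X), has_tail(X), barks(X), lost_a_leg(X), not is_cat(X), not is_fox(X).

    % and barks/1 itself now needs its own defeasible definition
    barks(X) :- makes_sound(X, S), classified_as_bark(S), not is_recording(X).

Every patch introduces new predicates that need the same treatment, which is how the rule count blows up.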

Ultimately, the proof, as they say, is in the pudding. By the way, the Cyc we are all talking about here is non-monotonic. https://www.cyc.com/wp-content/uploads/2019/07/First-Orderiz...

If you've tried something for decades and it isn't working, and doesn't even look like it's starting to work, while experiments on the brain point to probabilistic inference and probabilistic inference machines work much better than the alternatives ever did, then you have to face the music.

woodruffw 8 days ago

> How do you define barks(X)? (What about whimpers or growls? What about a dog that doesn't bark?) How do you handle breeds that look very different (Chihuahua vs. Great Dane)? How do you handle seeing only part of the animal? How do you represent the overall visual gestalt? The number of rules and exceptions quickly becomes vast and brittle.

This is the dimensionality mentioned in the adjacent post, and it's true of a probabilistic approach as well: an LLM trained on descriptions of dogs is going to hallucinate when an otherwise sensible query about dogs doesn't match its training. As others have said more elegantly than I will, this points to a pretty different cognitive model than humans have; human beings can (and do) give up on a task.

(I feel like I've had to say this a few times in threads now: none of this is to imply that Cyc was a success or would have worked.)

og_kalu 8 days ago

>This is the dimensionality mentioned in the adjacent post

LLMs are only a few years old, but symbolic AI was abandoned for NLP, computer vision, etc. long before that. Why? Because the alternative was just that bad and, more importantly, never seemed to budge with effort. Companies didn't wake up one morning and pour hundreds of millions into LMs. In fact, NNs were the underdog for a very long time. They poured more and more money in because it got better and better with investment.

There is zero reason to think even more dimensionality would do anything but waste even more time. At least the NN scalers can look back and see scaling work in the past. You don't even have that.

>an LLM trained on descriptions of dogs is going to hallucinate when an otherwise sensible query about dogs doesn't match its training. As others have said more elegantly than I will, this points to a pretty different cognitive model than humans have; human beings can (and do) give up on a task.

It doesn't take much to train LLMs to 'give up'. OpenAI talks about this from time to time. It's just not very useful and tends to overcorrect. And humans hallucinate (and otherwise have weird failure modes) all the time; we just call them funny names like Dunning-Kruger and optical illusions. Certainly less often than current SOTA LLMs, but it happens all the same.

>I feel like I've had to say this a few times in threads now: none of this is to imply that Cyc was a success or would have worked.

The point is not about Cyc. It's hardly the only attempt at non-monotonic logic. The point is that these approaches should work much better than they do if there were anything to them. Again, forget recent LLMs. Even when we were doing 'humble' things like spelling error detection and correction, text compression, voice transcription boosting, embeddings for information retrieval, recommenders, knowledge graph creation (ironically enough), machine translation, etc., symbolic systems were not even in the conversation. They performed that poorly.

woodruffw 8 days ago

I think we're talking past each other. I'm not interested in defending symbolic AI at all; it's clear it's failed. All told, I would not say I'm particularly interested in any kind of AI.

I'm interested in theory of mind, and I think defeasibility with a huge number of dimensions is a stronger explanation of how humans behave and think than something resembling an LLM. This doesn't somehow mean that LLMs haven't "won" (they have); I just don't think they're winning at human-like cognition. This in turn does not mean we could build a better alternative, either.