woodruffw 9 days ago

This is very complicated, because it now implies:

1. I can intend to sit on a chair but fail, in which case it isn't a chair (and I didn't intend to sit on it?)

2. I can intend to have my dog sit on my chair, but my dog isn't a person and so my chair isn't a chair.

The is-versus-use distinction you're making is fine; most people have the intuition that things "act" as a thing in relation to how they're used. But to take it a step further and claim that a thing doesn't take on its nature until a person directs their intent towards it is very unintuitive!

(In my mind, the answer is a lot simpler: a stump isn't a chair, but it's in the family network of things that are sittable, just like chairs and horses. Or, to borrow from Wittgenstein, a stump bears a family resemblance to a chair.)

josephg 8 days ago

I'm the person who asked about the definition of a chair upthread.

Just to make a very obvious point: nobody thinks of the definition of a chair as a particularly controversial idea. But clearly:

- We don't all agree on what a chair is (is a stump a chair or not?).

- Nobody in this thread has been able to give a widely accepted definition of the word "chair".

- It seems like we can't even agree on what criteria are admissible in the definition. (E.g., does it matter that I can sit on it? Does it matter that I can intend to sit on it? Does it matter that my dog can sit on it?)

If even defining what the word "chair" means is beyond us, I hold little hope that we can ever manually explain the concept to a computer. Returning to my original point above, this is why I think expert-systems-style approaches are a dead end. Likewise, I think any AI system that uses formal or symbolic logic in its internal definitions will always be limited in its capacity.
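To make that concrete, here's roughly what a hand-written symbolic definition ends up looking like. This is a toy Python sketch with made-up predicates, not Cyc's actual representation; the point is just that every rule you add invites a counterexample:

    # Toy expert-system-style rule for "chair" (made-up predicates, not Cyc's ontology).
    # Every condition that patches one counterexample opens up another.
    def is_chair(obj: dict) -> bool:
        if not obj.get("has_flat_surface"):
            return False   # ...but then a beanbag chair isn't a chair
        if not obj.get("intended_for_sitting"):
            return False   # ...but who intended the stump?
        if obj.get("seats", 1) > 1:
            return False   # ...a bench isn't a chair, but what about a loveseat?
        if not obj.get("has_legs"):
            return False   # ...excludes pedestal and swivel chairs
        return True

    stump = {"has_flat_surface": True, "intended_for_sitting": False, "has_legs": False}
    print(is_chair(stump))  # False -- and loosening the rules to admit it breaks something else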

And yet, I suspect ChatGPT will understand all of the nuance in this conversation just fine. Like everyone else, I'm surprised at how "smart" transformer-based neural nets have become. But if anything has a hope of achieving AGI, I'm not surprised that:

- It's something that uses a fuzzy, non-symbolic logic internally.

- The "internal language" for its own thoughts is an emergent result of the training process rather than being explicitly and manually programmed in.

- It translates its internal language of thought into words only at the end of the thinking / inference process. Because, as this "chair" example shows, our internal notion of what a chair is seems clear to us, but that doesn't mean we can translate that internal definition into a symbolic one (i.e., into words). (See the sketch below.)
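A rough illustration of that last point, with made-up numbers and a tiny three-word vocabulary: internally there is only a continuous vector, and words appear only when the final hidden state is projected onto the vocabulary at the output.

    import numpy as np

    # The model's working representation is just a continuous hidden state -- no symbols.
    hidden_state = np.array([0.8, -1.2, 0.3, 2.1])         # made-up values

    # Words only appear at the very end, when the hidden state is projected
    # onto a vocabulary and read off as probabilities.
    vocab = ["chair", "stump", "horse"]
    output_projection = np.array([                          # made-up weights
        [ 1.0, -0.5,  0.2,  0.9],   # "chair"
        [ 0.3,  0.8, -0.1,  0.4],   # "stump"
        [-0.6,  0.1,  0.7, -0.2],   # "horse"
    ])

    logits = output_projection @ hidden_state
    probs = np.exp(logits) / np.exp(logits).sum()           # softmax over the vocabulary
    for word, p in zip(vocab, probs):
        print(f"{word}: {p:.2f}")

Everything upstream of that projection is vectors all the way down, which is why asking the network for a crisp verbal definition of "chair" is a translation problem rather than a lookup.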

I'm not convinced that current transformer architectures will get us all the way to AGI / ASI. But I think that to have a hope of achieving human-level AI, you'll always want to build a system that has those elements of thought. Cyc, as far as I can tell, does not. So of course, I'm not at all surprised it's being dumped.