I have definitely broken chairs upon sitting in them, which someone else could have sat in just fine. So it's unclear why something particular to me would change the chair-ness of an object.
Similarly, I've sat in some very uncomfortable chairs. In fact, I'd say the average chair is not a particularly comfortable one.
For a micro-moment before giving way it was a chair, then it broke. Now it's no longer a chair; it's a broken chair.
That's not one but two particularities that aren't intrinsic to the chair itself: me (the sitter) and time.
Do you really have a personal ontology that requires you to know the tense of, and the person acting on, a thing before you can say what that thing is? I suspect you don't; most people don't, because it would imply that the chair wouldn't be a chair if nobody sat on it.
A stump isn't a chair until someone decides to sit on it; at that point it becomes a chair _to_ that person. A chair is only capable of acting as a "chair" object if constraints with regard to the sitter are met.
This is very complicated, because it now implies:
1. I can intend to sit on a chair but fail, in which case it isn't a chair (and I didn't intend to sit on it?)
2. I can intend to have my dog sit on my chair, but my dog isn't a person and so my chair isn't a chair.
This is/use distinction you're making is fine; most people have the intuition that things "act" as a thing in relation to how they're used. But to take it a step further and claim that a thing doesn't take on its nature until a person directs their intent toward it is very unintuitive!
(In my mind, the answer is a lot simpler: a stump isn't a chair, but it's in the family network of things that are sittable, just like chairs and horses. Or, to borrow from Wittgenstein, a stump bears a family resemblance to a chair.)
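To make the family-resemblance idea concrete, here's a toy sketch (the feature sets are entirely made up, not a real ontology): nothing below says whether a stump _is_ a chair, it just measures how much it overlaps with one.

```python
# Toy illustration of "family resemblance": no single feature defines a chair,
# but members of the family share overlapping features. Feature sets are
# invented for illustration only.

def resemblance(a: set, b: set) -> float:
    """Jaccard overlap between two feature sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b)

chair = {"sittable", "has_legs", "has_back", "man_made", "movable"}
stump = {"sittable", "fixed_in_place", "natural"}
horse = {"sittable", "has_legs", "movable", "alive"}
table = {"has_legs", "man_made", "movable", "flat_top"}

for name, thing in [("stump", stump), ("horse", horse), ("table", table)]:
    print(f"chair vs {name}: {resemblance(chair, thing):.2f}")

# A stump resembles a chair only through "sittable"; a horse and a table share
# more features. Neither overlap forces a yes/no verdict on "is it a chair?".
```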
I'm the person who asked about the definition of a chair up thread.
Just to make a very obvious point: nobody thinks of the definition of a chair as a particularly controversial idea. But clearly:
- We don't all agree on what a chair is (is a stump a chair or not?).
- Nobody in this thread has been able to give a widely accepted definition of the word "chair".
- It seems like we can't even agree on what criteria are admissible in the definition. (Eg, does it matter that I can sit on it? Does it matter that I can intend to sit on it? Does it matter that my dog can sit on it?)
If even defining what the word "chair" means is beyond us, I hold little hope that we can ever manually explain the concept to a computer. Returning to my original point above, this is why I think expert systems style approaches are a dead end. Likewise, I think any AI system that uses formal or symbolic logic in its internal definitions will always be limited in its capacity.
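To show what I mean, here's roughly what an expert-systems-style definition of "chair" ends up looking like. The predicates are invented for this example, but any rule set you pick runs straight into the edge cases this thread has surfaced.

```python
# A deliberately naive, rule-based definition of "chair", in the spirit of
# hand-written symbolic knowledge. Every attribute name here is made up.

def is_chair(obj: dict) -> bool:
    return (
        obj.get("has_seat", False)
        and obj.get("has_legs", 0) >= 3
        and obj.get("made_for_sitting", False)
        and not obj.get("broken", False)
    )

dining_chair = {"has_seat": True, "has_legs": 4, "made_for_sitting": True}
stump        = {"has_seat": True, "has_legs": 0, "made_for_sitting": False}
broken_chair = {"has_seat": True, "has_legs": 4, "made_for_sitting": True, "broken": True}
beanbag      = {"has_seat": True, "has_legs": 0, "made_for_sitting": True}

print(is_chair(dining_chair))  # True
print(is_chair(stump))         # False: no legs, not made for sitting
print(is_chair(broken_chair))  # False: so when exactly did it stop being a chair?
print(is_chair(beanbag))       # False: every rule you add excludes something it shouldn't
```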
And yet, I suspect ChatGPT will understand all of the nuance in this conversation just fine. Like everyone else, I'm surprised by how "smart" transformer-based neural nets have become. But if anything has a hope of achieving AGI, I'm not surprised that:
- It's something that uses a fuzzy, non-symbolic logic internally (see the rough sketch after this list).
- The "internal language" for its own thoughts is an emergent result of the training process rather than being explicitly and manually programmed in.
- It translates its internal language of thought into words only at the end of the thinking / inference process. Because, as this "chair" example shows, our internal definition of what a chair is seems clear to us. But that doesn't necessarily mean we can translate that internal definition into a symbolic one (i.e., into words).
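Here's the rough sketch I mentioned: hand-made vectors standing in for learned embeddings (nothing here comes from a real model), just to show chair-ness coming out as a matter of degree rather than a true/false verdict.

```python
# Toy sketch of the "fuzzy, non-symbolic" alternative: concepts as vectors,
# chair-ness as a graded similarity score instead of a boolean predicate.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Imaginary 4-d embedding space; a trained model would learn dimensions like
# these implicitly rather than having them labelled.
#            sittable, man-made, has-back, rigid
chair    = [0.9,      0.9,      0.8,      0.9]
stump    = [0.7,      0.0,      0.0,      0.9]
armchair = [0.9,      0.9,      0.9,      0.5]
horse    = [0.6,      0.0,      0.0,      0.2]

for name, v in [("stump", stump), ("armchair", armchair), ("horse", horse)]:
    print(f"chair-ness of {name}: {cosine(chair, v):.2f}")

# Nothing in the output says "is a chair" or "is not a chair"; everything is a
# matter of degree, which matches how the word actually behaves in use.
```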
I'm not convinced that current transformer architectures will get us all the way to AGI / ASI. But I think that to have a hope of achieving human-level AI, you'll always want to build a system that has those elements of thought. Cyc, as far as I can tell, does not. So of course, I'm not at all surprised it's being dumped.
What if it breaks in a way that renders it no longer a chair for you, but not for others?
This seems to imply that what is or is not a chair is subjective or conditional.