> I don’t think most people assign actual probabilities to events in their lives, and certainly not rigorous ones in any case.
Interesting. I don't think I agree.
I think people do assign actual probabilities to events. We just do it with a different part of our brain than the part which understands what numbers are. You can tell you do that by thinking through potential bets. For example, if someone (with no special knowledge) offered a 50/50 bet that your dining chair will break next time you sit on it, well, that sounds like a safe bet! Easy money! What if the odds changed - so, if it breaks you give them $60, and if it doesn't break they give you $40? I'd still take that bet. What about 100-1 odds? 1000-1? There's some point where you start to say "no no, I don't want to take that bet", or even "I'd take that bet if we swap sides".
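To make that concrete, here's a rough sketch (my own illustration, in Python, reusing the dollar amounts above) of how the point at which you'd refuse the bet translates into an implied probability:

```python
# Toy calculation: the odds at which a bet stops being worth taking tell you
# roughly what probability your intuition assigns to the chair breaking.

def break_prob_threshold(you_lose_if_breaks: float, you_win_if_holds: float) -> float:
    """Probability of the chair breaking at which the bet has zero expected value.

    EV = (1 - p) * win - p * loss; setting EV = 0 gives p = win / (win + loss).
    Below this p, taking the bet is positive expected value for you.
    """
    return you_win_if_holds / (you_win_if_holds + you_lose_if_breaks)

for loss, win in [(50, 50), (60, 40), (100, 1), (1000, 1)]:
    p = break_prob_threshold(loss, win)
    print(f"lose ${loss} / win ${win}: take the bet only if you think P(break) < {p:.4f}")
```

Wherever you flip from "easy money" to "no thanks" is, roughly, the number your intuition had already assigned without you ever writing it down.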
Somewhere in our minds, we hold an intuition around the probability of different events. But I think it takes a bit of work to turn that intuition into a specific number. We use that intuition for a lot of things - like, to calibrate how much surprise we feel when our expectation is violated. And to intuitively decide how much we should think through all the alternatives. If we place a bet on a coin flip, I'll think through what happens if the coin comes up heads or if it comes up tails. But if I walk into the kitchen, I don't think about the case that I accidentally stub my toe. My intuition assigns that a low enough probability that I don't think about it.
Talking about defeasible statements only scratches the surface of how complex our conditional probability reasoning is. In one sense, a transformer model is just that - an entire transformer-based LLM is just a conditional probability reasoning system. The whole model of however many billions of parameters is one big conditional probability reasoning machine whose only task is to figure out the probability distribution over the next token in a stream. And 100bn-parameter models are clearly still too small to hit the sweet spot. They keep getting smarter as we add more tokens. If you torture an LLM a little, you can even get it to spit out exact probability predictions. Just like our human minds.
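You don't even have to torture it, really - with an open-weights model you can read the conditional distribution straight off the logits. A minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint:

```python
# Read the model's next-token probability distribution directly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I sat down on the dining chair and it"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The last position holds the conditional distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: p = {prob.item():.4f}")
```

Every token it generates is sampled from exactly this kind of distribution; the chat interface just hides it.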
I think these kinds of expert systems fail because they can't do the complex probability reasoning that transformer models do. (And if they could, it would be impossible to manually write out the perhaps billions of rules they would need to reason about the world as accurately as ChatGPT can.)
> I think people do assign actual probabilities to events. We just do it with a different part of our brain than the part which understands what numbers are. You can tell you do that by thinking through potential bets.
I think these are different things! I can definitely make myself think about probabilities, but that's a cognitive operation rather than a meta-cognitive one.
In other words: I think what you're describing as "a bit of work" around intuitions is our rationalization (i.e., quantification) of a process that's internally non-statistical but defeasible instead. Defeasibility relationships can have priorities and staggerings, which we turn into fuzzy likelihoods when we express them.
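To sketch what I mean (my toy framing, not a claim about how brains actually implement it): a few defeasible rules with priorities can produce "it'll hold / no wait, it's cracked, it'll break" behaviour without storing any probability anywhere - only an ordering of which rule defeats which.

```python
# Toy defeasible reasoning: the highest-priority applicable rule wins.
# No probabilities are stored; only a defeat ordering among rules.
from dataclasses import dataclass

@dataclass
class Rule:
    conclusion: str
    priority: int  # higher priority defeats lower

def conclude(applicable: list[Rule]) -> str:
    return max(applicable, key=lambda r: r.priority).conclusion

default = Rule("the chair will hold", priority=10)    # default for furniture
specific = Rule("the chair will break", priority=20)  # applies if it's visibly cracked

print(conclude([default]))            # -> "the chair will hold"
print(conclude([default, specific]))  # -> "the chair will break"
```

The gap between the priorities is the sort of thing we verbalize afterwards as "probably" or "almost certainly" - which is the rationalization step I mean.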
My intuition for this comes from our inability to be confidently precise in our probabilistic rationalizations: I don't know about you, but I don't know whether I'm 57.1% or 57.01983% confident in an expression. I could make one up, but as you note with torturing the LLM, I'm doing it to "make progress," not because it's a true statement of probability.
(I think expert systems fail for a reason that's essentially not about probability reasoning, but about dimensionality - as the article mentions, Cyc has at least 12 dimensions, but there's no reason to believe our thoughts have only or exactly those 12. There's also no reason to believe we can ever model the number of dimensions needed, given that adding dimensions to an encoded relation set is brutally exponential.)
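Back-of-the-envelope, with made-up numbers (these aren't Cyc's): if each dimension can take even a handful of values, the space of contexts a hand-written rule might need to be qualified against grows exponentially.

```python
# Illustrative only: count distinct contexts as dimensions are added,
# assuming each dimension takes a small fixed number of values.
values_per_dimension = 5  # assumed, purely for illustration

for dims in (12, 16, 20, 24):
    contexts = values_per_dimension ** dims
    print(f"{dims} dimensions -> {contexts:.2e} possible contexts")
```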
>My intuition for this comes from our inability to be confidently precise in our probabilistic rationalizations: I don't know about you, but I don't know whether I'm 57.1% or 57.01983% confident in an expression.
LLMs are probabilistic and notoriously unable to be confidently precise in their probabilistic rationalizations.
> LLMs are probabilistic and notoriously unable to be confidently precise in their probabilistic rationalizations.
Sure. To tie these threads together: I think there are enough other differing properties to make me reasonably confident that my thought process isn't like an LLM's.
(Humans are imprecise, LLMs are imprecise, thermometers are imprecise, but don't stick me or my computer in an oven, please.)
>Sure. To tie these threads together: I think there are enough other differing properties to make me reasonably confident that my thought process isn't like an LLM's.
Doesn't have to be like an LLM's to be probabilistic