woodruffw 8 days ago

> I think people do assign actual probabilities to events. We just do it with a different part of our brain than the part which understands what numbers are. You can tell you do that by thinking through potential bets.

I think these are different things! I can definitely make myself think about probabilities, but that's a cognitive operation rather than a meta-cognitive one.

In other words: I think what you're describing as "a bit of work" around intuitions is our rationalization (i.e., quantification) of a process that's internally non-statistical, but defeasible instead. Defeasibility relationships can have priorities and staggerings, which we turn into fuzzy likelihoods when we express them.
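
To sketch what I mean (a toy I'm making up for illustration, not a claim about the actual machinery; the rules and priorities are invented), the conclusion falls out of which rule defeats which, and no number appears unless we bolt one on afterwards:

    # Toy defeasible reasoning: conflicts are settled by priority ordering,
    # not by numeric probabilities. Rule names and priorities are invented.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        concludes: bool  # does this rule argue for or against the claim?
        priority: int    # a more specific/stronger rule defeats a weaker one

    def conclude(rules: list[Rule]) -> bool:
        # The highest-priority applicable rule wins; there's no likelihood
        # anywhere in the process itself.
        return max(rules, key=lambda r: r.priority).concludes

    rules = [
        Rule("birds-fly", concludes=True, priority=1),
        Rule("penguins-don't-fly", concludes=False, priority=2),
    ]
    print(conclude(rules))  # False -- the more specific rule defeats the default

Any "I'm ~90% sure" shows up later, when we narrate the ordering, not from the ordering itself.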

My intuition for this comes from our inability to be confidently precise in our probabilistic rationalizations: I don't know about you, but I don't know whether I'm 57.1% or 57.01983% confident in an expression. I could make one up, but as you note with torturing the LLM, I'm doing it to "make progress," not because it's a true statement of probability.

(I think expert systems fail for a reason that's essentially not about probabilistic reasoning but about dimensionality -- as the article mentions, Cyc has at least 12 dimensions, but there's no reason to believe our thoughts have only or exactly those 12. There's also no reason to believe we can ever model the number of dimensions needed, given that adding dimensions to an encoded relation set is brutally exponential.)
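
For a sense of scale on "brutally exponential" (toy numbers; the 10 distinguishable values per dimension is purely an invented figure for illustration):

    # Toy arithmetic: if each dimension of context can take ~10 values, the
    # space of contexts a relation might need to be qualified against grows
    # as 10**d. The 10 is made up, just to show the shape of the growth.
    values_per_dim = 10
    for d in (12, 13, 20):
        print(d, "dimensions ->", f"{values_per_dim ** d:,}", "possible contexts")
    # Every added dimension multiplies that space by another full factor of 10.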

og_kalu 8 days ago

>My intuition for this comes from our inability to be confidently precise in our probabilistic rationalizations: I don't know about you, but I don't know whether I'm 57.1% or 57.01983% confident in an expression.

LLMs are probabilistic and notoriously unable to be confidently precise in their probabilistic rationalizations.

woodruffw 8 days ago

> LLMs are probabilistic and notoriously unable to be confidently precise in their probabilistic rationalizations.

Sure. To tie these threads together: I think there are enough other differing properties to make me reasonably confident that my thought process isn't like an LLM's.

(Humans are imprecise, LLMs are imprecise, thermometers are imprecise, but don't stick me or my computer in an oven, please.)

og_kalu 8 days ago

>Sure. To tie these threads together: I think there are enough other differing properties to make me reasonably confident that my thought process isn't like an LLM's.

Doesn't have to be like an LLM's to be probabilistic