/Some/ people bullshit themselves stating the plausible; others check their hypotheses.
The difference is total in both humans and automated processes.
Everyone, every last one of us, does this every single day, all day; only occasionally do we deviate to check ourselves, and even then it's often to save face.
A Nobel prize was given to Daniel Kahneman for related research.
If you think it doesn't apply to you, you're definitely wrong.
> occasionally
Properly educated people do it regularly, not just occasionally. You are describing a particular set of people; no, it does not cover everyone.
Some people will output a pre-given answer; some people check.
Edit: sniper... Find some argument.
Your decisions shape your preferences just as much as your preferences shape your decisions, and you're not even aware of it. Yes, everybody regularly confabulates plausible-sounding things that they themselves genuinely believe to be the 'real reason'. You're not immune or special.
I will read the article more carefully as soon as I have the time, but, putting aside the question of how such an investigation would prove that all people function in the same way,
it does not seem to counter the claim that some people «check their hypotheses» - as is proper. Some people do exercise critical thinking. It is an intentional process.
You're not getting it.
You ask A "Why did you choose that?" > He answers "I like the color blue"
This makes sense. This is what everyone thinks and believes is the actual sequence of such events.
But often the actual sequence is: "Let's go with this" > "Now I like the color blue"
'A' didn't lie to you or try to trick you. He didn't consciously rationalize liking blue after the fact. He's not stupid or "prone to bad thinking". Altering your perception of events without your conscious awareness is simply something your brain does fairly regularly.
Make no mistake: A genuinely likes blue now. The only difference is that he genuinely believes he made the choice because he liked blue, when in fact the brain, with its tendency to make you favor your choices, handed him the liking for blue after the fact so the choice sits better.
This is not something you "check your hypotheses" out of. And it's something every human deals with every day, including you.
I get what you are pointing at: you are focusing rather strictly on the post from Stavros, which states that "people pseudo-rationalize their not-at-the-time-rational behaviour with plausible explanatory theories".
But I was instead focusing on the general problem in the root post from Foundry27, and on a loose interpretation of the post from Stavros: the opposition between the faculty of generating convincing fantasies and the faculty of critical thinking. (That focus is there because it is more general and pressing in current AI than the contextual problem of "explanation", which is something of a "perversion" compared to the same task in classical AI, where the steps are recorded procedurally owing to transparency, as opposed to the paradox of asking an obscure, unreliable engine "what it did".)
What I meant is that the general scheme of bullshitting oneself and pseudo-rationalizing it is not the only way. Please see the other sub-branch in which we talked about mathematics. In important cases, the fantasies are then consciously checked as thoroughly as constraints allow.
So I stated «/Some/ people bullshit themselves stating the plausible; others check their hypotheses ... Some people will output a pre-given answer; some people check» - as a crucial discriminator in both the natural and the artificial. Please note that the trend of the past two years has generated in some the belief that this at-most-preliminary part is all there is.
Also note that Katskul wrote «only occasionally do we deviate to check ourselves» - so the reply is "No: the more one is educated and intellectually trained, the more one's thoughts are vetted - the thought process is disciplined to check its objects".
But re-checking the branch, I see that the post from Stavros was strongly specific to the "smaller" area of "pseudo-rationalizing", so I see why my posts may have looked odd-fitting.
By the way: I have seldom come across a post so weak.
> every last one of us
And how do you prove it?
> A Nobel prize was given
So?
> If you think, you
Prove it.
Support it, at least. That is not discussion.
How are you going to check your hypotheses for why you preferred that jacket to that other jacket?
Do not lose the original point: some systems have the goal of sounding plausible, while others have the goal of telling the truth. Some systems, when asked "where have you been", will reply "at the baker's" because it is a nice narrative in their "novel-writing, re-writing of reality"; others will check memory and say "at the butcher's", where they have actually been.
When people invent explicit reasons for why they turned left or right, those reasons remain hypotheses. The clumsy will promote those hypotheses to beliefs. The apt will keep the spontaneous ideas as hypotheses until the ability to assess them comes.
Everybody promotes these sorts of hypotheses to beliefs because it's not a conscious decision you are aware of. It's not about being clumsy or apt. You don't have much control over it.
It does not matter that there may be a tendency towards bad thinking: what matters is the possibility of proper thinking and the training towards it (becoming more and more proficient at it and practicing it constantly, having it as your natural state; in automation, implementing it in the process).
What you control is the intentional revision of thought.
(I am acquainted with earlier studies about the corpus callosum, but I do not know why you would mention that or what it would prove: maybe you could be clearer? I do not see how it could affect the notion of critical thinking.)
I've explained it the best I can in the other comment. But you keep making the mistake of treating this as just a matter of 'bad thinking' or 'intentional revision of thought', and while I'm not saying those things don't exist, it's not.
Not only are the rationalizations I'm talking about, and which some of these papers allude to, not intentional; they often happen without your conscious awareness.
On my having come with percussions to the strings meeting, see the other reply.
I want to check the papers you proposed as soon as I have the time: I find it difficult to believe that the conscious mind cannot intercept those "changes of mind" and correct them.
But please note: you are writing «Not only are ... not intentional»... Immature thought need not be intentional at all: it is largely spontaneous thought. But whether it is part of an intentional process ("let us ponder towards some goal") or part of the subterranean functions, when it becomes visible (or «intercepted», as I wrote above), the trained mind looks at it with diffidence and asks questions about its foundations - intentionally, consciously, as a learnt process.
Is that example representative of the LLM tasks for which we seek explainability?
Are we holding LLMs to a higher standard than people?
Ideally, yes. LLMs are tools that we expect to work; people are inherently fallible and (even unintentionally) deceptive. LLMs being human-like in this specific way is not desirable.
Then I think you'll be very disappointed. LLMs aren't in the same category as calculators, for example.
I have no illusions about LLMs; I have been working with them since the original BERT, always with these same issues and more. I'm just stating what would be needed in the future to make them reliably useful outside of creative writing & (human-guided & checked) search.
If an LLM produces incorrect or orthogonal rhetoric with no way to reliably fix or debug it, it's just not as useful as it theoretically could be, given the data contained in its parameters.