snthpy 6 days ago

A{rt,I} imitating life

I believe that's why humans reason too. We make snap judgements and then use reason to try to convince others of our beliefs. Can't recall the reference right now, but they argued that reasoning is really a tool for social influence. That also explains why people who are good at it find it hard to admit when they are wrong: they're not used to having to, because they can usually out-argue others. Prominent examples are easy to find; X marks the spot.

jamesemmott 6 days ago

I wonder if the reference you are reaching for, if it's not the Jonathan Haidt book suggested by a sibling comment, is The Enigma of Reason by the cognitive psychologists Hugo Mercier and Dan Sperber (2017).

In that book (quoting here from the abstract), Mercier and Sperber argue that reason 'is not geared to solitary use, to arriving at better beliefs and decisions on our own', but rather to 'help us justify our beliefs and actions to others, convince them through argumentation, and evaluate the justifications and arguments that others address to us'. Reason, they suggest, 'helps humans better exploit their uniquely rich social environment'.

They resist the idea (popularized by Daniel Kahneman) that there is 'a contrast between intuition and reasoning as if these were two quite different forms of inference', proposing instead that 'reasoning is itself a kind of intuitive inference'. For them, reason as a cognitive mechanism is 'much more opportunistic and eclectic' than is implied by the common association with formal systems like logic. 'The main role of logic in reasoning, we suggest, may well be a rhetorical one: logic helps simplify and schematize intuitive arguments, highlighting and often exaggerating their force.'

Their 'interactionist' perspective helps explain how illogical rhetoric can be so socially powerful; it is reason, 'a cognitive mechanism aimed at justifying oneself and convincing others', fulfilling its evolutionary social function.

Highly recommended, if you're not already familiar.

snthpy 5 days ago

Thank you. That's exactly the idea and described much more eloquently. I probably heard it through the Sapolsky lecture from a sibling comment but that captures it exactly. Bookmarked.

omgwtfbyobbq 6 days ago

I think Robert Sapolsky's lectures on YouTube cover this to some degree around 115.

https://youtu.be/wLE71i4JJiM?feature=shared

Sometimes our cortex is in charge, sometimes other parts of our brain are, and we can't tell the difference. Regardless, if we try to justify it later, that justification isn't always coherent because we're not always using the part of our brain we consider to be rational.

snthpy 5 days ago

Yes that was probably it because I rewatched that recently. Thanks!

shshshshs 6 days ago

People who are good at reasoning find it hard to admit that they were wrong?

That’s not my experience. People with reason are... reasonable.

You mention X and that’s not where the reasoners are. That’s where the (wanna be) politicians are. Rhetoric is not all of reasoning.

I can agree that rationalizing snap judgements is one of our capabilities but I am totally unconvinced that it is the totality of our reasoning capabilities. Perhaps I misunderstood.

Hedepig 6 days ago

This is not entirely my experience. I've debated a successful engineer who by all accounts has good reasoning skills, but he will absolutely double down on unreasonable ideas he's made up on the fly if he can find what he considers a coherent argument behind them. Sometimes, if I can absolutely prove him wrong, he'll change his mind.

But I think this is ego getting in the way, and our reluctance to change our minds.

We like to point at artificial intelligence, explain how it works differently, and then conclude that it's therefore not "true reasoning". I'm not sure that's a good conclusion. We should look at the output and decide. As flawed as it is, I think it's rather impressive.

mdp2021 6 days ago

> ego getting in the way

The very thing that was identified thousands of years ago as the evil to ditch.

> reluctance to change our minds

That is clumsiness in a general drive that makes sense and is a recognized part of Belief Change Theory: epistemic change is conservative. I.e., when you revise a body of knowledge, you do not want to lose valid notions. But conversely, you do not want to be unable to see change or errors, so there is a balance.

> it's not "true reasoning"

If it demonstrably does not check its "spontaneous" ideas explicitly, then it is a correct formula to say 'it's not "true reasoning"'.

Hedepig 5 days ago

> then it is a correct formula to say 'it's not "true reasoning"'

why is that point fundamental?

mdp2021 5 days ago

Because just as you do not want a human interlocutor to speak out of their dreams, uttering the first ideas that come to mind unvetted, but instead want them to have thought long, hard, properly, and diligently, equally you'll want the same from an automation.

Hedepig 5 days ago

If we do figure out how to vet these thoughts, would you call it reasoning?

mdp2021 5 days ago

> vet these thoughts, would you call it reasoning

Probably: other details may be missing, but checking one's ideas is a requirement. The sought engine must have critical thinking.

I have expressed this many times in the past two years, sometimes at length, always rephrasing it on the spot: the intelligent entity refines a world model iteratively by assessing its contents.

Hedepig 5 days ago

I do see your point, and it is a good point.

My observation is that the models are better at evaluating than they are at generating; this is the technique used in the o1 models. They use unaligned hidden tokens as "thinking" steps, which include evaluation of previous attempts.

I thought that was a good approach to vetting bad ideas.
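The vet-the-ideas idea can be caricatured without any LLM at all. In the toy sketch below (all names are made up for illustration, and a random guesser stands in for the generator), a deliberately dumb "proposer" paired with a cheap, reliable "verifier" still finds correct answers, because evaluating a candidate is far easier than generating a good one:

```python
import random

def propose(n, rng):
    # "Generation": blindly guess a candidate factor pair for n.
    a = rng.randint(2, n - 1)
    return (a, n // a)

def verify(n, pair):
    # "Evaluation": checking a candidate is cheap and reliable.
    a, b = pair
    return a > 1 and b > 1 and a * b == n

def solve(n, attempts=10_000, seed=0):
    # Propose-then-vet loop: only candidates that pass verification survive.
    rng = random.Random(seed)
    for _ in range(attempts):
        pair = propose(n, rng)
        if verify(n, pair):
            return pair
    return None  # no vetted candidate found within the budget

pair = solve(91)  # 91 = 7 * 13, so a vetted pair exists
```

The asymmetry is the whole point: `propose` is hopeless on its own, but a strong `verify` turns its stream of bad ideas into a reliable answer, which is roughly the role the hidden evaluation steps play.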

mdp2021 4 days ago

> My observation is that the [o1-like] models are better at evaluating than they are generating

This is very good (it is a very good thing that you see the out-loud reasoning working well as judgement),

but at this stage we face an architectural problem. The "model, exemplary" entities will iteratively judge and thereby both * refine the world model towards progressive truthfulness and completeness, and * refine their judgement abilities and general intellectual proficiency in the process. That (in a way) requires that the main body of knowledge (including "functioning", proficiency in the better processes) be updated. The current architectures I know of are static... Instead, we want them to learn: to understand (not memorize), e.g., that Copernicus is better than Ptolemy, and to use the gained intellectual keys in subsequent relevant processes.

The main body of knowledge - notions, judgements and abilities - should be affected in a permanent way, to make it grow (like natural minds can).

Hedepig 4 days ago

The static nature of LLMs is a compelling argument against their reasoning ability.

But they can learn, albeit in a limited way, using the context. Though to my knowledge that doesn't scale well.

fragmede 5 days ago

The smarter a person is, the better they are at rationalizing their decisions. Especially the really stupid decisions.

snthpy 5 days ago

People with reason ... sound reasonable.

I think some prominent people on X who are good at reasoning from first principles will double down on things rather than admit their mistakes.

The other very prominent psychological phenomenon I have observed in the world is "projection", i.e., seeing in other people qualities that we have ourselves. I guess it is because we think others would do what we would do ourselves. Trump is a clear example of this: whatever he accuses someone else of, you know he is doing himself. The point here is that this doubling down on bad reasons in order to not admit my mistakes is something I've observed in myself. Reason does indeed help me try to overcome it when I recognise it, but the tricky part is being able to recognise it.

mdp2021 6 days ago

Even before Galileo, we had experiments to determine whether ideas represented reality or not. And in crucial cases, long before that, it meant life or death. This will be clear to engineers.

«Reason» is part of that mechanism of vetting ideas. You experience massive failures without it.

So, no, trained judgement is a real thing, and the presence of innumerable incompetents does not prove an alleged absence of the competent.

briffid 6 days ago

Jonathan Haidt's The Righteous Mind describes this in detail.

snthpy 5 days ago

Thanks