Robin_Message 8 days ago

The weights encode the end goal, etc. But the model does not have meaningful access to those weights when it produces a chain of thought.

So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.

senordevnyc 8 days ago

> So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.

My understanding is that we can’t either. We essentially make up post-hoc stories to explain our thoughts and decisions.

Zee2 8 days ago

I too have no access to the patterns of my neurons' firing - I can only think and observe as a result of them.