> we have absolutely no way to know
To me, this means it doesn't matter at all whether LLMs reason or not.
It might matter if AI/LLM safety is a concern. We can't begin to judge safety properly without understanding how these models work internally.