I don't understand why people get so up in arms about the Chinese room. It's very clear that a major part of human intelligence is a mental model of the physical world, and linguistic concepts have an (often complex) relationship to that model. There's no magic here. Nothing about that argument implies anything about neurons. The process of forming a mental model of the world and mapping words onto it could easily take place across many, many neurons in the human brain, because it does! It does not take place in an LLM. That does not imply that nobody will ever develop a positronic brain that could do the same. We just clearly haven't done so yet.
Saying "if you can't point to the neuron that does X, then you can't prove X happens" isn't a scientific perspective. It's a willfully ignorant one. If you're confident in the scientific process, then we will eventually understand how all kinds of human mental processes make sense in the context of neural networks.
The point is that the Chinese room is nothing more than an appeal to absurdity. The fact that opening the box reveals mechanisms we would not call understanding does not mean the system as a whole, the Chinese room, does not understand. The neuron comparison is there to demonstrate exactly that. The brain is a Chinese room. Understanding doesn't have to be relegated to a single neuron, so feel free to open the box and show any of us what happens in there that we would call understanding.
>It does not take place in an LLM.
I don't know what else to tell you, but LLMs absolutely model concepts and the physical world, separate from the words that describe them. This has been demonstrated several times.