So this article is what happens when people who don't really understand the models "test" things.
There are several fatal flaws.
The first problem is that he isn't clearly and concisely presenting the current board state. He is expecting the model to attend to the whole move sequence and reconstruct the board state from it.
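Something like this minimal sketch is what I mean by showing the board rather than the move list. It uses the python-chess library and a made-up opening line purely for illustration (the article doesn't say what tooling, if any, was used):

```python
# Sketch: replay the game locally, then hand the model the *position*
# directly instead of asking it to reconstruct it from the move list.
# Requires python-chess (pip install chess); the moves are illustrative.
import chess

moves = ["e4", "e5", "Nf3", "Nc6", "Bb5"]  # hypothetical game so far

board = chess.Board()
for san in moves:
    board.push_san(san)  # replay each move to reach the current position

prompt = (
    f"Current position (FEN): {board.fen()}\n"
    f"Board:\n{board}\n"  # str(board) renders an ASCII piece grid
    f"{'White' if board.turn == chess.WHITE else 'Black'} to move. "
    "What is the best move?"
)
print(prompt)
```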
Secondly, he isn't allowing the model to think elastically using CoT (chain-of-thought) prompting or other strategies.
Honestly, I am shocked it is working at all. He has basically formulated the problem in the worst possible way.
I'm not sure CoT would help in this situation. I'm an amateur at chess, but in my experience a large part of playing is intuition, and I'm not confident the model could even accurately summarise its thinking. There are tasks on which models perform worse when asked to explain their reasoning. That said, this is completely vibes-based.
Given my experience with the models, giving the model room to think would let it attend to the different ramifications of the current board layout. I would expect a non-trivial performance gain.
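For what it's worth, here's a rough sketch of the prompt change being argued for, contrasting a bare "next move" prompt with a CoT-style one. The FEN and the wording are purely illustrative, not the article's actual setup:

```python
# Sketch only: a bare prompt vs. a CoT-style prompt for the same position.
# The FEN is an illustrative Ruy Lopez position after 1.e4 e5 2.Nf3 Nc6 3.Bb5.
fen = "r1bqkbnr/pppp1ppp/2n5/1B2p3/4P3/5N2/PPPP1PPP/RNBQK2R b KQkq - 3 3"

# What the article effectively does: demand an immediate answer.
bare_prompt = f"Position (FEN): {fen}\nBlack to move. Reply with one move in SAN."

# The "let it think" variant: ask for analysis before the final move.
cot_prompt = (
    f"Position (FEN): {fen}\n"
    "Black to move. Before answering, think step by step: list two or three "
    "candidate moves, the immediate threats for both sides, and the "
    "consequences of each candidate. Then give your answer on its own line "
    "as: FINAL MOVE: <move in SAN>."
)
```

Whether the extra tokens actually buy anything here is exactly the empirical question the thread is debating.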