The author either didn't read the Hacker News comments last time, or he missed the top theory: that OpenAI probably used chess as a benchmark when developing the model that is good at chess, for whatever business reasons they had at the time.
FWIW, this is exactly what I thought: OpenAI pursued it as a skill set (likely using a large chess dataset) for their own reasons and then abandoned it as not particularly beneficial outside chess.
It's still interesting to try to work out how you would make a generalist LLM good at chess, so I appreciated the post, but I don't think there's a huge mystery!
This is plausible. One of the top chess engines in the world (Leela) is just a neural network trained on billions of chess games.
So it makes sense that an LLM would also be able to acquire some skill by simply having a large volume of chess games in its training data.
OpenAI probably just eventually decided it wasn't useful to keep pursuing chess skill.
Oh, really? What happened to the theory that training on code magically caused some high-level reasoning ability?