> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models.
That's assuming that, somehow, an LLM is a machine. Why would you think that?
Replace the word with one of your own choosing, if that will help us get to the part where you have a point to make.
I think we are discussing whether LLMs can emulate chess-playing machines, regardless of whether they are literally composed of a flock of stochastic parrots.
That's simple logic. Quoting you again:
> Machines are good at applying rules, so when they fail to apply rules correctly, it means they have incorrect, incomplete, or totally absent models.
If this line of reasoning applies to machines, but LLMs aren't machines, how can you derive any of these claims?
"A implies B" may be right, but you must first demonstrate A before reaching conclusion B..
> I think we are discussing whether LLMs can emulate chess playing machines
That is incorrect. We're discussing whether LLMs can play chess. Unless you think that human players also emulate chess-playing machines?
Engineers really have a hard time coming to terms with probabilistic systems.