But:

a) The pile of LLM training data is vastly larger.

b) The data is actual human utterances in situ--these are ponies, not pony shit.

c) LLMs have no intelligence ... they channel the intelligence of a vast number of humans by pattern matching their utterances to a query. This has indeed proved useful because of how extremely well the statistical apparatus works, but the fact that LLMs have no cognitive states puts great limits on what this technology can achieve.
With Cyc, OTOH, it's not even clear what you can get out of it. The thing may well prove useful if combined with LLMs, but it's under lock and key.
The big conclusions about symbolic AI that the author reaches based on this one system and approach are unwarranted. As he himself notes, "Even Ernest Davis and Gary Marcus, highly sympathetic to the symbolic approach to AI, found little evidence for the success of Cyc, not because Cyc had provably failed, but simply because there was too little evidence in any direction, success or failure."
>> they channel the intelligence of a vast number of humans by pattern matching their utterances to a query.
Just a little problem with that: to understand the utterances of a vast number of humans, you need to channel them to something that can understand human utterances in the first place. Just channeling them around from statistic to statistic doesn't do the trick.
Um, the "something" is the person reading the LLM's output. I'm afraid you have completely missed the context and point of the discussion, which was not about LLMs understanding things--they understand nothing ("LLMs have no cognitive states"). But again, "because of how extremely well the statistical apparatus works", their outputs are useful to intelligent consumers who do have cognitive states--us.