I suspect at some point the pendulum will swing back the other way and symbolic approaches will have some kind of breakthrough and become trendy again. If they do, I'd bet it will have something to do with hardware acceleration of these systems, much like GPUs did for neural networks, in order to crunch really large quantities of facts.
The Bitter Lesson has a few things to say about this.
The Bitter Lesson says "general methods that leverage computation are ultimately the most effective". That doesn't seem to rule out symbolic approaches. It does rule out anything which relies on having humans in the loop, because terabytes of data plus a dumb learning process works better than megabytes of data plus expert instruction.
(I know your message wasn't claiming that The Bitter Lesson was explicitly a counterpoint, I just thought it was interesting.)
Imho, this is wrong. Even when access to vast amounts of compute isn't the deciding factor, symbolic methods seem to consistently underperform statistical/numerical ones across a wide variety of domains. I can't help but think there's more to it than just brute force.
I've lost count of how many times I've written the same words in this thread, but: SAT Solving, Automated Theorem Proving, Program Verification and Model Checking, Planning and Scheduling. These are not domains where symbolic methods "consistently underperform" anything (see the toy solver example below).
You guys really need to look into what's been going on in classical AI over the last 20-30 years. There are two large conferences that are mainly about symbolic AI, IJCAI and AAAI. Then there are all the individual conferences on the above sub-fields, like the International Conference on Automated Planning and Scheduling (ICAPS). Don't expect to hear about symbolic AI on social media or in press releases from Alphabet and Meta, but there's plenty of material online if you're interested.
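To give a concrete flavor of what these tools do, here's a minimal SAT example using the Z3 solver's Python bindings (assuming the z3-solver package is installed; the formula itself is made up for illustration):

    # pip install z3-solver
    from z3 import Bools, Solver, Or, Not, sat

    # A tiny propositional formula in CNF:
    # (a OR b) AND (NOT a OR c) AND (NOT b OR NOT c)
    a, b, c = Bools("a b c")

    solver = Solver()
    solver.add(Or(a, b))
    solver.add(Or(Not(a), c))
    solver.add(Or(Not(b), Not(c)))

    if solver.check() == sat:
        # Z3 finds a satisfying assignment purely symbolically.
        print(solver.model())  # e.g. [a = True, b = False, c = True]

Industrial solvers routinely handle instances with millions of clauses this way, which is exactly what makes them practical for verification and planning.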
Real AGI will need a way to reason about factual knowledge. An ontology is a useful framework for establishing facts without inferring them from messy human language.
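As a toy illustration of what I mean (the facts and relation names here are invented, not taken from any real ontology), in Python:

    # A hand-written fact base: (subject, relation, object) triples.
    FACTS = {
        ("penguin", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("penguin", "can", "swim"),
    }

    def is_a(sub, sup, facts=FACTS):
        # True if `sub` is a (possibly indirect) kind of `sup`,
        # derived by following explicit is_a links, not by parsing text.
        if (sub, "is_a", sup) in facts:
            return True
        return any(is_a(mid, sup, facts)
                   for (s, rel, mid) in facts
                   if s == sub and rel == "is_a")

    print(is_a("penguin", "animal"))  # True, via the bird link

The point is that every conclusion traces back to an explicitly asserted fact, rather than a statistical guess at what a sentence meant.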
These guys are trying to combine symbolic reasoning with LLMs somehow: https://www.symbolica.ai/
Or maybe program synthesis combined with LLMs might be the way?
It does seem like the Cyc people hit a wall with simply collecting facts, since it meant having to keep a human in the loop.
The problem, I think, is that if you have LLMs figuring out the propositions, the whole system is just as prone to garbage-in, garbage-out as LLMs are.