cmrdporcupine 10 days ago

I suspect at some point the pendulum will swing back the other way and symbolic approaches will have some kind of breakthrough and become trendy again. And I bet it will have something to do with accelerating these systems with hardware, much as GPUs have done for neural networks, in order to crunch really large quantities of facts.

luma 10 days ago

The Bitter Lesson has a few things to say about this.

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

wzdd 10 days ago

The Bitter Lesson says "general methods that leverage computation are ultimately the most effective". That doesn't seem to rule out symbolic approaches. It does rule out anything which relies on having humans in the loop, because terabytes of data plus a dumb learning process works better than megabytes of data plus expert instruction.

(I know your message wasn't claiming that The Bitter Lesson was explicitly a counterpoint, I just thought it was interesting.)

bcoates 10 days ago

Imho, this is wrong. Even independent of access to vast amounts of compute, symbolic methods seem to consistently underperform statistical/numerical ones across a wide variety of domains. I can't help but think that there's more to it than just brute force.

YeGoblynQueenne 9 days ago

I've lost count of how many times I've written the same words in this thread, but: SAT Solving, Automated Theorem Proving, Program Verification and Model Checking, Planning and Scheduling. These are not domains where symbolic methods "consistently underperform" anything.

You guys really need to look into what's been going on in classical AI in the last 20-30 years. There are two large conferences that are mainly about symbolic AI, IJCAI and AAAI. Then there are all the individual conferences on the above sub-fields, like the International Conference on Automated Planning and Scheduling (ICAPS). Don't expect to hear about symbolic AI on social media or in press releases from Alphabet and Meta, but there's plenty of material online if you're interested.
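
For a concrete taste, here's a minimal sketch of SAT/SMT solving with Z3's Python bindings (assuming the z3-solver package is installed; the constraints are made up purely for illustration):

    # Toy SAT/SMT example using Z3 (pip install z3-solver).
    from z3 import Bool, Solver, Or, Not, Implies, sat

    a, b, c = Bool('a'), Bool('b'), Bool('c')

    s = Solver()
    s.add(Or(a, b))       # a or b must hold
    s.add(Implies(a, c))  # if a then c
    s.add(Not(c))         # c is false

    if s.check() == sat:
        print(s.model())  # a satisfying assignment, e.g. b=True, a=False, c=False
    else:
        print("unsatisfiable")

Real instances have millions of clauses rather than three, but the workflow is the same: state the constraints symbolically and let the solver search.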

kevin_thibedeau 10 days ago

Real AGI will need a way to reason about factual knowledge. An ontology is a useful framework for establishing facts without inferring them from messy human language.
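
As a toy illustration of that idea (plain Python, not modeled on Cyc or any real ontology engine): facts stored as triples plus a couple of inference rules already let you derive new knowledge without parsing any natural language.

    # Toy knowledge base: facts as (subject, relation, object) triples,
    # plus rules for is_a transitivity and property inheritance.
    facts = {
        ("penguin", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("bird", "has", "feathers"),
    }

    def infer(facts):
        """Forward-chain until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            new = set()
            for (x, r1, y) in derived:
                for (y2, r2, z) in derived:
                    if y != y2:
                        continue
                    if r1 == "is_a" and r2 == "is_a":
                        new.add((x, "is_a", z))   # is_a is transitive
                    if r1 == "is_a" and r2 == "has":
                        new.add((x, "has", z))    # properties inherited along is_a
            if not new <= derived:
                derived |= new
                changed = True
        return derived

    print(("penguin", "has", "feathers") in infer(facts))  # True, derived, never stated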

IshKebab 10 days ago

These guys are trying to combine symbolic reasoning with LLMs somehow: https://www.symbolica.ai/

specialgoodness 10 days ago

Check out Imandra's platform for neurosymbolic AI: https://www.imandra.ai/

whiplash451 10 days ago

Or maybe program synthesis combined with LLMs might be the way?
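
A hedged sketch of what that could look like, with the LLM stubbed out as a hypothetical propose_candidates generator; the symbolic half, checking candidates against input/output examples, is the part shown concretely:

    # Propose-and-verify synthesis loop. In a neurosymbolic setup an LLM (or a
    # grammar enumerator) would propose candidate programs; propose_candidates
    # here is a hypothetical stand-in yielding a few hand-written lambdas.
    examples = [(1, 2), (3, 6), (5, 10)]  # target behaviour: f(x) = 2 * x

    def propose_candidates():
        yield ("x + 1", lambda x: x + 1)
        yield ("x * x", lambda x: x * x)
        yield ("2 * x", lambda x: 2 * x)

    def verify(f, examples):
        # The symbolic, checkable part: candidate must match every example.
        return all(f(x) == y for x, y in examples)

    for source, f in propose_candidates():
        if verify(f, examples):
            print("found:", source)  # found: 2 * x
            break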

cmrdporcupine 10 days ago

It does seem like the Cyc people hit a wall with simply collecting facts, having to keep a human in the loop.

The problem, I think, is that if you have LLMs figuring out the propositions, the whole system is just as prone to garbage-in, garbage-out as the LLMs themselves.