HarHarVeryFunny 9 days ago

I think the problem with trying to hand-create symbolic rules for AI is that things like natural language, and the real world, are messy. Even with fuzzy rules you are never going to accurately capture all the context dependencies and nuances, which may in any case change over time. Learning from real-world data is the only realistic approach, although I don't think language models are the answer either - you need a system that is continually learning and correcting its own errors.

CYC was an interesting experiment, though. Even if it was always going to be somewhat brittle due to inevitable knowledge gaps, it seems there was something more fundamentally wrong with the approach for it not to have been more capable. An LLM could also be regarded as an expert system of sorts (one that learns its own rules from the training data), but a couple of critical differences are perhaps that an LLM's rules are as much about recognizing the context in which to apply a rule as about what the rule itself does, and that the rules are generative rather than declarative - directly driving behavior rather than just contributing to a deductive closure.
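
To make that declarative/generative distinction concrete, here's a toy Python sketch (purely illustrative - nothing to do with how CYC or an LLM is actually built): a declarative rule only adds facts to a knowledge base, with inference being the deductive closure over those rules, while a "generative" rule maps context directly to the next output/action.

    # Declarative, CYC-style: rules assert new facts; inference is deductive closure.
    facts = {("bird", "tweety")}
    rules = [
        # if X is a bird, then X can fly
        lambda f: {("can_fly", x) for (p, x) in f if p == "bird"},
    ]

    def deductive_closure(facts, rules):
        """Apply rules until no new facts can be derived."""
        facts = set(facts)
        while True:
            new = set().union(*(rule(facts) for rule in rules)) - facts
            if not new:
                return facts
            facts |= new

    print(deductive_closure(facts, rules))
    # {('bird', 'tweety'), ('can_fly', 'tweety')}

    # "Generative", LLM-style: the rule is context -> next output,
    # directly driving behavior rather than extending a fact base.
    def generative_rule(context: str) -> str:
        # stand-in for a learned conditional distribution over continuations
        if context.endswith("tweety is a"):
            return " bird"
        return " ..."

    print("tweety is a" + generative_rule("tweety is a"))

In the first style the rule's output is more knowledge; in the second the rule's output is the behavior itself, and "knowing when the rule applies" is baked into the same mapping.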

YeGoblynQueenne 9 days ago

Yes, hand-coding rules doesn't work in the long run. But burning through the world's resources to approximate a huge dataset isn't a viable long-term solution for anything either.