It is worth plodding ahead with symbolic AI research because:
- much less resource-hungry / planet-warming
- auditable chains of inference (see the sketch below)
Of course there is still the downside that it doesn't work very well.
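
For what it's worth, "auditable" means something concrete here: in a symbolic system, every derived fact can carry a pointer to the rule and premises that produced it, so you can print the entire proof after the fact. A minimal sketch in Python (toy forward-chaining engine; the rule and fact names are made up for illustration, not from any real system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Derivation:
    fact: str
    rule: str              # rule that produced this fact ("given" for inputs)
    premises: tuple = ()   # facts the rule consumed

def forward_chain(facts, rules):
    # rules: list of (name, [premise, ...], conclusion) triples
    derived = {f: Derivation(f, "given") for f in facts}
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived[conclusion] = Derivation(conclusion, name, tuple(premises))
                changed = True
    return derived

def explain(fact, derived, depth=0):
    # Walk the derivation backwards and print the full audit trail.
    d = derived[fact]
    print("  " * depth + f"{d.fact}  [{d.rule}]")
    for p in d.premises:
        explain(p, derived, depth + 1)

rules = [("R1", ["bird(tweety)", "can_fly_default(birds)"], "flies(tweety)")]
derived = forward_chain(["bird(tweety)", "can_fly_default(birds)"], rules)
explain("flies(tweety)", derived)
# flies(tweety)  [R1]
#   bird(tweety)  [given]
#   can_fly_default(birds)  [given]
```

You can't get that kind of trace out of a transformer's weights, which is the whole appeal.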
Neither does the mainstream approach, given the whole hallucination problem.
But then again, humans "hallucinate" in this sense all the time, too.
I thought the end goal was to make something way more capable than humans.
The thing is, if we produce something that is worse than humans (and right now LLMs are worse than humans equipped with good search indexes), there's not much point in doing it. It's provably less expensive to bear, raise, and educate actual humans. And to educate a human, you somehow don't need to dump the whole internet and every piece of pirated content ever created into their head.
> LLMs are worse than humans with good search indexes at hand
But "good search indexes" are all LLM-based.
:)