A former employee of Cyc did an insightful AMA on HN back in 2019: https://news.ycombinator.com/item?id=21783828
> But the longer I worked there the more I felt like the plan was basically:
> 1. Manually add more and more common-sense knowledge and extend the inference engine
> 2. ???
> 3. AGI!
In retrospect, that reasoning doesn't seem so wrong.
I mean, if I were to oversimplify and over-abstract AGI into a long list of if/elses, that's roughly how I'd go about it. It's just that there's A Lot to consider.
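To make the "long list of if/elses" point concrete, here's a minimal toy sketch of that style of system: hand-written common-sense rules plus a forward-chaining loop. It's nothing like Cyc's actual CycL machinery; the rules, facts, and names are all made up for illustration.

```python
# Toy rule-based "common sense": hand-written rules + forward chaining.
# Everything here (facts, rule names) is illustrative, not Cyc's real content.

facts = {("bird", "tweety"), ("penguin", "opus")}

# Each rule: if all premise predicates hold for a subject, conclude the new predicate.
rules = [
    ({"bird"}, "can_fly"),        # birds fly...
    ({"penguin"}, "bird"),        # penguins are birds...
    ({"penguin"}, "cannot_fly"),  # ...except penguins (exceptions pile up fast)
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts are derived (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for _, subj in list(facts):
                if all((p, subj) in facts for p in premises) and (conclusion, subj) not in facts:
                    facts.add((conclusion, subj))
                    changed = True
    return facts

print(forward_chain(set(facts), rules))
# Derives both ("can_fly", "opus") and ("cannot_fly", "opus") -- handling
# exceptions and contradictions like this is exactly where "A Lot" comes in.
```

Even this tiny example already produces a contradiction the moment an exception shows up, which is a hint of why step 1 of that plan never obviously terminates.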