masfuerte 10 days ago

It's funny, because AI companies are currently spending fortunes on mathematicians, physicists, chemists, software engineers, etc. to create good training data.

Maybe this money would be better spent on creating a Lenat-style ontology, but I guess we'll never know.

throwanem 10 days ago

We may. LLMs are capable, arguably even inventive at times, but lack the ability to test against ground truth; ontological reasoners can never exceed the implications of the ground truth they're given, but within that scope they reason perfectly. These seem like obviously complementary strengths.
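
Roughly the shape I have in mind, as a toy Python sketch (the ontology, the rule, and the propose_candidates() stub standing in for the LLM are all made up for illustration, not any real system): the generative side proposes candidate facts, and a tiny forward-chaining reasoner accepts only those entailed by the curated ground truth.

    # Toy sketch: generator proposes, symbolic reasoner verifies against ground truth.
    GROUND_TRUTH = {("socrates", "is_a", "human")}
    RULES = [
        # If ?x is_a human, then ?x is_a mortal (classic syllogism rule).
        (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
    ]

    def entailed(fact, facts, rules):
        """Forward-chain the rules over the known facts, then check membership."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for (_, prem_p, prem_o), (concl_x, concl_p, concl_o) in rules:
                for (x, p, o) in list(known):
                    if p == prem_p and o == prem_o:
                        derived = (x if concl_x == "?x" else concl_x, concl_p, concl_o)
                        if derived not in known:
                            known.add(derived)
                            changed = True
        return fact in known

    def propose_candidates():
        """Stand-in for an LLM: returns plausible-sounding candidate facts."""
        return [
            ("socrates", "is_a", "mortal"),    # actually follows from the ontology
            ("socrates", "is_a", "immortal"),  # plausible-sounding but unsupported
        ]

    for fact in propose_candidates():
        verdict = "accepted" if entailed(fact, GROUND_TRUTH, RULES) else "rejected"
        print(fact, "->", verdict)

The point of the sketch is just the division of labor: the proposer can be as freewheeling as it likes, because nothing enters the accepted set unless the reasoner can derive it from the ontology.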