eob 10 days ago

Or vice versa - perhaps some subset of the "thought chains" of Cyc's inference system could be useful training data for LLMs.

euroderf 10 days ago

When I first learned about LLMs, what came to mind was some sort of "meeting of the minds" with Cyc. 'Twas not to be, apparently.

imglorp 10 days ago

I view Cyc's role there as a RAG for common sense reasoning. It might prevent models from advising glue on pizza.

    ;; facts the knowledgebase already holds:
    (is-a 'pizza 'food)
    (not (is-a 'glue 'food))
    ;; constraint: every ingredient must check out as a known food
    (for-all i ingredients
      (check (is-a i 'food)))

jes5199 10 days ago

Sure, but the bigger models don’t make these trivial mistakes, and I’m not sure that translating the LLM's English sentences into Lisp and trying to check them would be more accurate than just training the models better.
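
To make the "translate and check" idea concrete, here's a toy sketch of the checking half only. The translation step (the LLM's job) is left out entirely, and the fact base and predicate names are invented for illustration; none of this is Cyc's actual API.

    ;; Tiny hand-built fact base standing in for a real knowledgebase.
    (defparameter *facts*
      '((is-a pizza food)
        (is-a cheese food)
        (is-a glue adhesive)))

    (defun known-fact-p (fact)
      "Return T if FACT appears verbatim in the fact base."
      (and (member fact *facts* :test #'equal) t))

    (defun acceptable-ingredient-p (thing)
      "An ingredient passes only if the fact base says it is a food."
      (known-fact-p `(is-a ,thing food)))

    ;; (acceptable-ingredient-p 'cheese) => T
    ;; (acceptable-ingredient-p 'glue)   => NIL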

yellowapple 10 days ago

The bigger models avoid those mistakes by being, well, bigger. Offloading to a structured knowledgebase would achieve the same result without the model needing to grow. Indeed, the model could be a lot smaller (and a lot less resource-intensive) if it only needed to convert $LANGUAGE queries into Lisp queries and Lisp results back into $LANGUAGE results (where $LANGUAGE is the user's natural language, whatever that might be), rather than storing some approximation of that knowledgebase within itself on top of understanding $LANGUAGE and whatever ad-hoc query/result language it has unconsciously invented for itself.
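
A rough sketch of that division of labour, with both model calls stubbed out as hypothetical functions; only the knowledgebase lookup in the middle is real code, and the fact base is made up:

    (defparameter *kb*
      '((is-a pizza food)
        (is-a glue adhesive)))

    (defun nl->query (sentence)
      "Stub for the small model's first job: $LANGUAGE in, Lisp query out."
      (declare (ignore sentence))
      '(is-a glue food))                 ; e.g. for "Is glue a food?"

    (defun kb-answer (query)
      "The structured knowledgebase does the actual lookup."
      (and (member query *kb* :test #'equal) t))

    (defun query->nl (query answer)
      "Stub for the return trip: Lisp result in, $LANGUAGE out."
      (format nil "~:[No~;Yes~], ~(~a~) ~:[is not~;is~] a ~(~a~)."
              answer (second query) answer (third query)))

    ;; (query->nl '(is-a glue food) (kb-answer '(is-a glue food)))
    ;; => "No, glue is not a food."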

pfdietz 9 days ago

Beyond just checking for mistakes, it would be interesting to see if Cyc has concepts that the LLMs don't or vice versa. Can we determine this by examining the models' internals?