I guess that's a fine distinction I don't make. If the problem requires the AI to write a Prolog program to solve, and it is capable of writing the necessary Prolog code, then I don't see the practical or philosophical difference in taking the transitive step and saying the AI solved it. If I asked you to solve an air traffic control problem and you did so by writing Prolog, no one would claim you weren't capable of solving it.
Agentic LLMs can solve complicated reasoning and scheduling problems by writing special-purpose solutions (which might resemble the things we call GOFAI). It's the nature of AGI--which LLMs assuredly are--that they can solve problems by inventing specialized tools, just as we do.
Can you show us a log from when you gave an LLM a scheduling problem and it decided to solve it with Prolog or Z3 or something similar?
On mobile so I’m not sure how to export a chat log, but the following prompts worked with ChatGPT:
1: I need to schedule scientific operations for a space probe, given a lot of hard instrument and schedule constraints. Please write a program to do this. Use the best tool for the job, no matter how obscure.
2: This is a high-value NASA space mission and so we only get one shot at it. We need to make absolutely sure that the solution is correct and optimal, ideally with proofs.
3: Please code me up a full example, making up appropriate input data for the purpose of illustration.
I got an implementation, using the MiniZinc constraint solver, that at first glance looks correct. I'm sure people could quibble, but I was not trying to lead the model in any way. The second prompt was needed because the first generated a simple Python program; I think that was because I didn't specify at the start that this was a high-value project needing mission assurance. A better initial prompt would've gotten the desired result on the first try.
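For anyone who hasn't worked with this class of solver: here's a toy sketch of the kind of hard-constraint scheduling problem involved, written in plain Python with brute-force search rather than MiniZinc. The instruments, durations, and time windows are entirely made up for illustration; this is not the model's output.

```python
from itertools import product

# Toy instance: three instrument operations, each with a duration and an
# allowed time window (all values hypothetical, for illustration only).
ops = {
    "spectrometer": {"duration": 3, "window": (0, 10)},
    "camera":       {"duration": 2, "window": (4, 12)},
    "magnetometer": {"duration": 4, "window": (2, 14)},
}

def feasible(starts):
    """Hard constraints: each op fits its window, and no two overlap
    (pretend they share a single power bus, so they're mutually exclusive)."""
    intervals = []
    for name, s in starts.items():
        d = ops[name]["duration"]
        lo, hi = ops[name]["window"]
        if s < lo or s + d > hi:
            return False
        intervals.append((s, s + d))
    intervals.sort()
    return all(a_end <= b_start
               for (_, a_end), (b_start, _) in zip(intervals, intervals[1:]))

def solve():
    """Exhaustively enumerate start times; return (makespan, starts) with
    the smallest makespan, or None if no schedule satisfies the constraints."""
    names = list(ops)
    ranges = [range(ops[n]["window"][0], ops[n]["window"][1]) for n in names]
    best = None
    for combo in product(*ranges):
        starts = dict(zip(names, combo))
        if feasible(starts):
            makespan = max(s + ops[n]["duration"] for n, s in starts.items())
            if best is None or makespan < best[0]:
                best = (makespan, starts)
    return best

print(solve())  # optimal makespan is 9: total work is 3+2+4 with no overlap allowed
```

Brute force is only viable for a toy like this; the point of reaching for MiniZinc (or Z3, or Prolog's CLP libraries) is that constraint propagation and branch-and-bound scale to realistic instances, and an optimality proof falls out of the exhaustive search the solver performs.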