> One wrong turn, and down the rabbit hole they go, never to recover.
I think this is probably at the heart of the best argument against these things as viable tools.
Once you have sufficiently described the problem such that the LLM won't go the wrong way, you've likely already solved most of it yourself.
Tool use with error feedback sounds autonomous, but you'll quickly find that the error-handling layer is a thin proxy for the human operator's intentions.
Yes, but we don't believe this is a 'fundamental' problem. We have learnt to guide their actions a lot better, and they go down the rabbit hole a lot less now than when we started out.
True, but on the other hand, there are a bunch of tasks that are just very typing-intensive and not really complex.
Especially in GUI development, building forms, charts, etc.
I could imagine that LLMs are a great help here.
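As a concrete illustration of what "typing-intensive but not complex" means here, consider form markup: the sketch below (field names and the helper are invented for the example) is mostly mechanical repetition, exactly the kind of row-by-row boilerplate an LLM can autocomplete without any deep reasoning.

```python
# Hypothetical sketch of repetitive, low-complexity GUI/form code.
# Every field follows the same label + input pattern; only the
# (name, type, label) tuple changes from row to row.
FIELDS = [
    ("first_name", "text", "First name"),
    ("last_name", "text", "Last name"),
    ("email", "email", "Email address"),
    ("phone", "tel", "Phone number"),
]

def render_field(name: str, input_type: str, label: str) -> str:
    """Emit the boilerplate markup for one labelled input."""
    return (
        f'<label for="{name}">{label}</label>\n'
        f'<input id="{name}" name="{name}" type="{input_type}" />'
    )

form_body = "\n".join(render_field(*field) for field in FIELDS)
print(form_body)
```

The human contribution is the four-tuple list; everything else is repetition, which is why autocomplete-style assistance shines here even if it fails on genuinely hard problems.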