amichail 5 days ago

Sometimes the prompt is fine, but the generated code has bugs.

So you tell the AI about the bugs, it tries to fix them, and sometimes it fails.

I don't think LLMs even try to debug their code by running it in a debugger.

Pinkthinker 5 days ago

I don’t think you have ever written financial software. To take an example, you are not going to be able to prompt ChatGPT to price a specific bond in a mortgage-backed security. It’s hard enough to do using structured pseudocode.
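
Even the plain-vanilla case takes care to specify. Here's a minimal sketch of pricing an ordinary fixed-coupon bond by discounting its cash flows (the bond_price helper is hypothetical, and it assumes a flat annual yield compounded at the coupon frequency):

    # Minimal sketch: price a plain fixed-coupon bond by discounting
    # its cash flows. Assumes a flat annual yield compounded with the
    # coupon frequency; real MBS tranche pricing also needs prepayment
    # modeling (e.g., a PSA curve) and path-dependent cash flows,
    # which this deliberately ignores.
    def bond_price(face, coupon_rate, annual_yield, years, freq=2):
        coupon = face * coupon_rate / freq   # periodic coupon payment
        rate = annual_yield / freq           # periodic discount rate
        periods = years * freq
        pv_coupons = sum(coupon / (1 + rate) ** t
                         for t in range(1, periods + 1))
        pv_face = face / (1 + rate) ** periods  # discounted principal
        return pv_coupons + pv_face

    # A 5-year, 4% semiannual-coupon bond at a 5% yield:
    print(round(bond_price(1000, 0.04, 0.05, 5), 2))  # ~956.24

And that's before day-count conventions, settlement, or the prepayment model enter the picture, which is exactly the part that's hard to capture in a prompt.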