And yet prompts can be optimized.
You can optimize a prompt for a particular LLM, and this can only be done through experimentation. If you take your heavily optimized prompt and apply it to a different model, there is a good chance you will need to start from scratch.
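What that experimentation looks like in practice can be sketched as a simple evaluation loop: score each prompt variant against a small test set and keep the winner. Everything here is illustrative; `call_model` is a hypothetical stub standing in for a real LLM API call, and the variants and test set are made up.

```python
def call_model(prompt: str, question: str) -> str:
    # Stub: a real implementation would send `prompt` + `question`
    # to the model's API. For illustration, pretend this model only
    # answers cleanly when the prompt asks for a terse reply.
    return "4" if "concise" in prompt else "The answer is 4, I think."

def score(prompt: str, test_set: list[tuple[str, str]]) -> float:
    """Fraction of test questions answered exactly right."""
    hits = sum(call_model(prompt, q).strip() == expected
               for q, expected in test_set)
    return hits / len(test_set)

variants = [
    "You are a helpful assistant.",
    "You are a helpful assistant. Be concise: answer with the value only.",
]
test_set = [("What is 2 + 2?", "4")]

# Pick the variant that scores best on the test set.
best = max(variants, key=lambda p: score(p, test_set))
print(best)
```

The catch the text describes is exactly this: `best` is only best for the model behind `call_model`. Swap in a different model and the scores, and the winning variant, can change completely.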
Every few weeks or months, depending on when the latest model was released, you need to reevaluate your bag of tricks.
At some point it becomes roulette: you try this, you try that, and maybe it works, maybe it doesn't.