LLMs are fundamentally text-completion engines. The chat-style tuning layered on top is impressive, but the bulk of the training effort still goes into plain text completion. I keep this in mind with a lot of my prompting and get good results.
Regurgitation and Examples are both ways to lean into that completion behavior and recover whatever the chat tuning has obscured.
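As a minimal sketch of what "leaning into completion" looks like in practice (the template, examples, and function name here are hypothetical illustrations, not a specific library's API): instead of asking a question outright, you lay down a pattern whose most natural continuation is the answer you want.

```python
# Build a completion-style few-shot prompt: the model's "job" is simply
# to continue the pattern, which plays to its text-completion training.
# The examples and format below are made up for illustration.

def few_shot_prompt(examples, query):
    """Return a prompt string ending at the slot the model should fill."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the completion slot; the answer is the continuation
    return "\n".join(lines)

examples = [
    ("the cat sat", "THE CAT SAT"),
    ("hello world", "HELLO WORLD"),
]
prompt = few_shot_prompt(examples, "good morning")
print(prompt)
```

The prompt ends mid-pattern, so even a model with no chat tuning at all has an obvious continuation to produce.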