furyofantares 7 days ago

LLMs are fundamentally text-completion engines. The chat-based tuning layered on top is impressive, but underneath they are still text completers; that's where most of the training energy goes. I keep this in mind with a lot of my prompting and get good results.

Regurgitating and Examples are both ways to lean into that and recover whatever the chat-based tuning has papered over.
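A minimal sketch of what "leaning into completion" can look like in practice: instead of asking a question chat-style, lay out a few labeled examples so the most natural continuation of the text is the answer you want. The prompt format and example data here are hypothetical, just to illustrate the shape.

```python
def few_shot_prompt(examples, query):
    """Build a completion-style prompt: labeled input/output pairs
    followed by the query, so the model's natural next-token
    continuation is the desired answer."""
    parts = [f"Input: {inp}\nOutput: {out}\n" for inp, out in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n".join(parts)

examples = [
    ("great movie, loved it", "positive"),
    ("terrible plot, fell asleep", "negative"),
]

# The resulting string is sent as a single prompt; the model
# "completes" the final Output: line rather than answering a question.
print(few_shot_prompt(examples, "best film of the year"))
```

Whether this beats a plain chat-style instruction depends on the model and task, but the idea is to make the target output the path of least resistance for a text completer.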

zi_ 7 days ago

What else do you keep in mind when prompting that you've found useful?