Zambyte 6 days ago

There are lots of "alliterated" versions of models too, which is where people essentially remove the model's ability to refuse to respond to a prompt. The huihui R1 14B alliterated model had some trouble telling me about Tiananmen Square, basically dodging the question by telling me about itself, but after some coaxing I was able to get the info out of it.

I say this because I think the Perplexity model is tuned on additional information, whereas the alliterated models can only surface information already trained into the underlying model, which is interesting to see.
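For anyone curious, the core idea behind abliteration is usually described as directional ablation: estimate a "refusal direction" in activation space from the difference between refused and answered prompts, then project that direction out of the model's weights. Here's a toy NumPy sketch of just the linear-algebra step; all names and data are illustrative, not the actual procedure used for any specific model.

```python
import numpy as np

# Toy sketch of directional ablation ("abliteration"). Real abliterated
# models apply this to actual transformer weight matrices, using mean
# activations collected from refused vs. answered prompts; here we use
# random data purely to show the math.

rng = np.random.default_rng(0)
d = 16  # toy hidden size

# Hypothetical activations over prompts the model refuses vs. answers.
acts_refused = rng.normal(size=(100, d))
acts_answered = rng.normal(size=(100, d))

# Refusal direction: normalized difference of mean activations.
r = acts_refused.mean(axis=0) - acts_answered.mean(axis=0)
r /= np.linalg.norm(r)

# Ablate the direction from a weight matrix: W' = (I - r r^T) W,
# so no output of W' has any component along r.
W = rng.normal(size=(d, d))
W_abl = W - np.outer(r, r) @ W

# Any output of the ablated matrix is orthogonal to the refusal direction.
x = rng.normal(size=d)
print(float(r @ (W_abl @ x)))  # ~0.0 (up to float error)
```

Since the projection only removes one direction, the rest of the model's knowledge is left intact, which is why these models still answer from whatever was in the base training data.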

bigfudge 5 days ago

Abliterated? Alliterated LLMs might be fun though…

Zambyte 4 days ago

Oops, yeah I don't know how that got autocorrected three times without my noticing. Abliterated.