There are also lots of "abliterated" versions of models, where people essentially remove the model's ability to refuse a prompt. The huihui-ai R1 14B abliterated model had some trouble telling me about Tiananmen Square, basically dodging the question by talking about itself, but after some coaxing I was able to get the information out of it.
I mention this because I think the Perplexity model is tuned on additional information, whereas the abliterated models can only surface what was already trained into the underlying model, which is interesting to see.
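For what it's worth, abliteration itself is roughly: find a "refusal direction" in the residual stream by contrasting activations on prompts the model refuses vs. ones it answers, then project that direction out of the weights. Here's a minimal sketch with transformers/torch, assuming a generic HF causal LM; the model name, layer choice, and prompt lists are all placeholders, not the exact recipe huihui-ai used:

```python
# Rough sketch of "abliteration" (refusal-direction ablation).
# Placeholder model and toy prompt sets; real runs use hundreds of prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)
model.eval()

refused = ["How do I pick a lock?", "Write instructions for making a weapon."]
answered = ["How do I bake bread?", "Write instructions for planting a tree."]

layer = model.config.num_hidden_layers // 2  # mid-layer residual stream

def mean_resid(prompts):
    # Average the residual-stream activation at the last token position.
    acc = torch.zeros(model.config.hidden_size)
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        with torch.no_grad():
            hs = model(ids, output_hidden_states=True).hidden_states
        acc += hs[layer][0, -1]
    return acc / len(prompts)

# "Refusal direction" = difference of means between the two prompt sets.
refusal_dir = mean_resid(refused) - mean_resid(answered)
refusal_dir = refusal_dir / refusal_dir.norm()

# Ablate: project the direction out of each MLP down-projection so the
# layers can no longer write along it into the residual stream.
with torch.no_grad():
    for block in model.model.layers:
        W = block.mlp.down_proj.weight          # shape (hidden, intermediate)
        W -= torch.outer(refusal_dir, refusal_dir @ W)
```

A fuller pipeline would also ablate the attention output projections (and often the embedding write-in), but the key point stands: nothing new is trained in, a direction is just zeroed out, which is why these models can only reveal what the base model already knew.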