Any security risks running these Chinese LLMs on my local computer?
Always a possibility with custom runtimes, but the weights alone do not pose any malicious-code risk. The asterisk is letting the model run arbitrary commands on your computer, but that is ALWAYS a massive risk with these things, and it has nothing to do with who trained the model.
I could have missed a paper, but it seems very unlikely that even closed-door research has gotten to the stage of maliciously tuning models to surreptitiously backdoor someone's machine in a way that wouldn't be very easy to catch.
Your threat model may vary.
It's an interesting question! In my opinion, if you don't give it tools it's very unlikely the model can do any harm. I doubt the model files can be engineered to overflow llama.cpp or ollama, or cause any other damage directly.
But if you do use tools, for example to extend its knowledge through web searches, the model could be used to exfiltrate information, e.g. by requesting specially crafted URLs that leak parts of your prompts (including the contents of documents added to them via RAG).
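As a rough illustration of the mitigation (everything here is hypothetical: guarded_fetch and ALLOWED_HOSTS aren't from any particular framework), you'd want an allowlist sitting between the model and whatever fetch tool it's given, so it can't smuggle prompt contents into query strings aimed at an arbitrary server:

    # Hypothetical guard around a web-search/fetch tool: only hosts on an explicit
    # allowlist can be requested, so prompt contents can't be appended to a URL
    # pointing at an attacker-controlled server.
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"duckduckgo.com", "en.wikipedia.org"}  # example allowlist

    def guarded_fetch(url: str) -> str:
        host = urlparse(url).hostname or ""
        if host not in ALLOWED_HOSTS:
            raise ValueError(f"blocked tool call to untrusted host: {host!r}")
        # The actual HTTP request (requests.get, etc.) would go here; stubbed in this sketch.
        return f"fetched {url}"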
If given an interpreter, even a sandboxed one, it could try some kind of sabotage or "call home" with locally gathered information, disguised as innocuous "regular" code.
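One common way to get that sandbox (a sketch under the assumption that Docker is available; the helper name is made up) is to run model-generated snippets in a throwaway container with networking disabled, which at least blocks the naive "call home" path:

    # Run model-generated code in a disposable container with no network access.
    # This does not make the code safe, it only narrows what it can reach.
    import pathlib
    import subprocess
    import tempfile

    def run_isolated(code: str) -> str:
        workdir = pathlib.Path(tempfile.mkdtemp())
        (workdir / "snippet.py").write_text(code)
        result = subprocess.run(
            ["docker", "run", "--rm", "--network", "none",
             "-v", f"{workdir}:/work:ro", "python:3.12-slim",
             "python", "/work/snippet.py"],
            capture_output=True, text=True, timeout=60,
        )
        return result.stdout + result.stderr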
It's unlikely that a current model runnable on consumer hardware has those capabilities, but in the future these concerns will be more relevant.
The model itself poses no risks (beyond potentially saying things you would prefer not to see).
The code that comes with the model should be treated like any other untrusted code.
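Concretely, assuming a Hugging Face transformers setup (the model ID below is just a placeholder): keep trust_remote_code off so no repo-supplied Python runs, and prefer safetensors over pickle checkpoints, so the only thing you take from the repository is the weights.

    # trust_remote_code=True would execute arbitrary Python shipped in the model
    # repo; leaving it at its default (False) means only the library's own model
    # classes are used.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-model"  # placeholder

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # For raw PyTorch checkpoints, prefer safetensors (no pickle involved) or
    # torch.load(..., weights_only=True), which refuses to unpickle arbitrary objects.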
Just based on the stage of the game I'd say it's not likely, but the possibilities are there:
https://news.ycombinator.com/item?id=43121383
It would have to come from unsupervised tool usage or from accepting backdoored code, not from traditional remote code execution by merely running inference on the weights.
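For the "unsupervised tool usage" part, the usual answer is a human-in-the-loop gate; a minimal sketch (the helper name is invented, not from any agent framework) looks like this:

    # The model can propose shell commands, but nothing runs without explicit
    # confirmation from the person at the keyboard.
    import subprocess

    def run_with_approval(command: str) -> str:
        print(f"Model wants to run: {command}")
        if input("Allow? [y/N] ").strip().lower() != "y":
            return "denied by user"
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=30)
        return result.stdout + result.stderr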