The problem with China is that they will have to figure out latency. Right now, DeepSeek models hosted in China have very high latency. It could be due to DDoS attacks and insufficient infrastructure, but probably also the Great Firewall, runtime prompt censoring, and the servers' physical location (high ping to the US and EU).
Surely ping time is basically irrelevant when dealing with LLMs? It has to be dwarfed by inference time.
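A rough back-of-the-envelope sketch supports this. All the numbers below are illustrative assumptions (a ~300 ms intercontinental round trip, typical streaming-API behavior), not measurements of any particular service:

```python
# Back-of-the-envelope latency breakdown for a remote LLM API call.
# All numbers are illustrative assumptions, not measurements.

def total_latency_ms(rtt_ms: float, ttft_ms: float,
                     tokens: int, tokens_per_sec: float) -> float:
    """One request-response cycle: network round trip + time to first
    token + token generation time. Streaming tokens ride the already-open
    connection, so per-token network cost is ignored here."""
    generation_ms = tokens / tokens_per_sec * 1000
    return rtt_ms + ttft_ms + generation_ms

# Assumed: 300 ms US<->China RTT, 500 ms to first token,
# 500 output tokens at 30 tokens/sec.
total = total_latency_ms(rtt_ms=300, ttft_ms=500,
                         tokens=500, tokens_per_sec=30)
network_share = 300 / total
print(f"total ≈ {total:.0f} ms, network share ≈ {network_share:.1%}")
```

Under these assumptions the network round trip is a low-single-digit percentage of the total, though it does add directly to time-to-first-token, which is the part users feel.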
> Right now DeepSeek models hosted in china are having very high latency.
If you are talking about DeepSeek's own hosted API service, it's because they deliberately decided to run the service under heavily overloaded conditions, with a very aggressive batching policy, to extract more throughput out of their (limited) H800s.
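The throughput-vs-latency tradeoff behind that policy can be sketched with a toy cost model. Larger batches amortize the fixed per-decoding-step cost (kernel launches, reading the weights once per step), so aggregate throughput rises, but every request in the batch waits out every step. The cost constants here are made-up illustrative values, not DeepSeek's actual figures:

```python
# Toy model of GPU decode batching: larger batches amortize the fixed
# per-step cost, raising throughput, but each request waits for the
# whole batch. Constants are illustrative assumptions only.

def step_time_ms(batch: int, fixed_ms: float = 50.0,
                 per_req_ms: float = 2.0) -> float:
    # One decoding step: fixed cost (weight reads, kernel launches)
    # plus a small marginal cost per request in the batch.
    return fixed_ms + per_req_ms * batch

def throughput_and_latency(batch: int, tokens: int = 500):
    step = step_time_ms(batch)
    latency_s = tokens * step / 1000           # each request sees every step
    throughput = batch * tokens / latency_s    # aggregate tokens/sec
    return throughput, latency_s

for b in (1, 8, 64):
    tput, lat = throughput_and_latency(b)
    print(f"batch={b:3d}: {tput:6.0f} tok/s aggregate, {lat:5.1f} s per request")
```

In this toy model, going from batch 1 to batch 64 multiplies aggregate throughput many times over while per-request latency also grows severalfold, which is exactly the tradeoff an overloaded, GPU-constrained operator would accept.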
Yes, for some reason (the reason I heard is "our boss don't want to run such a business", which sounds absurd, but /shrug) they refuse to scale up serving their own models.
> the reason I heard is "our boss don't want to run such a business" which sounds absurd
Liang gave up the No. 1 Chinese hedge fund position to pursue AGI, he has a very good chance to short the entire US stock market and pocket some stupid amount of money when R2 is released, and he has pretty much unlimited support from local and central Chinese government. Trying to make some pennies from hosting models is not going to sustain what he enjoys now.
tbh the "short the stock market" story is pretty silly; it wasn't predictable at all. But yeah, the guy gets to do whatever he wants now.