I can fill in anything missing. I'd like to go to bed but I didn't want to leave anyone hanging; had to come edit a comment I made from my phone, and my phone also doesn't show me replies (I use Materialistic; is there a better app?)
https://getdeploying.com/guides/run-deepseek-r1 this is the "how to do it"
https://news.ycombinator.com/item?id=42897205 posted here: a link to how to set it up on an AMD EPYC machine, ~$2000. IIRC a few of the comments discuss how many GPUs you'd need instead (a lot of the 80GB GPUs, 12-16 I think), plus the motherboards and PSUs and things. However, to just run the largest DeepSeek you merely need enough memory to hold the model and the context, plus ~10%. I forget why +10%, but that's my hedge to be more accurate.
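Back-of-the-envelope, that sizing rule looks something like this. The parameter count, bytes-per-weight, and context figure are my own ballpark assumptions, not numbers from the linked guide:

```python
def memory_needed_gb(n_params_billions, bytes_per_weight,
                     context_overhead_gb=16, hedge=0.10):
    """Rough estimate: weights + context (KV cache etc.) + ~10% hedge.
    All inputs are ballpark assumptions, not measurements."""
    weights_gb = n_params_billions * bytes_per_weight  # billions * bytes/weight ~= GB
    return (weights_gb + context_overhead_gb) * (1 + hedge)

# DeepSeek R1 is ~671B parameters; at a 4-bit quant that's ~0.5 bytes/weight
print(round(memory_needed_gb(671, 0.5)))  # -> 387 (GB), i.e. big-RAM CPU territory
```

So even heavily quantized, the full model wants hundreds of GB, which is why the cheap route is EPYC with lots of system RAM rather than VRAM.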
note: I have not checked whether LM Studio can run the large DeepSeek model; I can't fathom a reason it couldn't, at least on the CPU-only EPYC build.
note too: I just asked in their Discord and it appears "any GGUF model will load if you have the memory for it". GGUF is the file format the model is stored in. Someone will take whatever format Mistral or Facebook or whoever publishes and convert it to GGUF, and from there, someone will start quantizing the model into smaller files (with less ability), also as GGUF.
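To make "smaller files with less ability" concrete: file size scales with bits per weight. A rough sketch, where the effective bits-per-weight figures for each quant level are my approximations (real GGUF files carry block metadata, so they run a bit larger):

```python
# Rough GGUF file sizes for a ~671B-parameter model at common quant levels.
PARAMS_BILLIONS = 671  # my assumption for the full DeepSeek R1

# Approximate effective bits per weight for common llama.cpp quant types
quants = {"Q8_0": 8.5, "Q4_K_M": 4.8, "Q2_K": 2.6}

for name, bits in quants.items():
    size_gb = PARAMS_BILLIONS * bits / 8  # billions of params * bits / 8 ~= GB
    print(f"{name}: ~{size_gb:.0f} GB")
```

The lower you quantize, the less memory you need, at the cost of model quality; that's the trade the GGUF community is making with each smaller file.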
That's $2000 but for just 3.5-4.25 tokens/s? I'm hesitant to say that 4 tokens/s is useless, but that is a tremendous downgrade (although perhaps some smaller model would be usable).
Right, but that is CPU only; there are no tensor cores in a GPU getting lit up for that 4 t/s. So the minimum to actually run DeepSeek is $2000, and the max is basically whatever you can afford, based on your needs. If you're only running single prompts at any given time, you only need enough GPUs to fit the model plus the context (as I mentioned); going that route, at minimum your outlay is going to be on the order of $130,000 in GPUs alone.
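The GPU math sketched out, using my own assumptions for the per-card price and the unquantized model-plus-context footprint (the thread's 12-16 card and ~$130k figures fall out of numbers in this neighborhood):

```python
import math

MODEL_PLUS_CONTEXT_GB = 1000  # full-precision weights + context, rough assumption
GPU_MEM_GB = 80               # A100/H100-class card
GPU_PRICE_USD = 10_000        # very rough per-card price assumption

n_gpus = math.ceil(MODEL_PLUS_CONTEXT_GB / GPU_MEM_GB)
print(f"{n_gpus} GPUs, ~${n_gpus * GPU_PRICE_USD:,}")  # -> 13 GPUs, ~$130,000
```

And that's before motherboards, PSUs, interconnect, and cooling, which is why the CPU-only EPYC build is the budget floor.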
If I can find it later (I couldn't find it last night when I replied), there is an article that explains how to start adding consumer GPUs, or even 1-2 Nvidia A100 80GB GPUs, to the EPYC build to speed that up. I have a vague recollection that can get you up to 20 t/s or thereabouts, but don't quote me on that; it's been a while.