syntaxing 6 hours ago

I’m so curious how big DeepSeek’s R1-lite is in comparison to this. The DeepSeek R1-lite has been really good, so I really hope it’s about the same size and not MoE.

Also I find it interesting how they’re doing an OwO face. Not gonna lie, it’s a fun name.

2
Havoc 5 hours ago

I’d be surprised if theirs isn’t MoE.

Then again, with the way consumer RAM is trending, MoE might end up more favourable for us.

pkkkzip 6 hours ago

Forget about R1, what hardware are you using to run it?

syntaxing 6 hours ago

I haven’t run QwQ yet, but it’s a 32B, so about 20GB of RAM with a Q4 quant, closer to 25GB for the Q4_K_M one. You can wait a day or so for the quantized GGUFs to show up (we should see the Q4 in the next hour or so). I personally use Ollama on a MacBook Pro; it usually takes a day or two for the model to show up there. Any M-series MacBook with 32GB+ of RAM will run this.
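For anyone curious where the ~20GB figure comes from, here’s a rough back-of-the-envelope sketch: parameter count times bits per weight, plus some headroom for metadata and KV cache. The ~4.5 bits/weight for Q4 and the ~10% overhead factor are my assumptions, not official numbers.

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    """Rough quantized model footprint in GB.

    params_billions: model size in billions of parameters (32 for a 32B model)
    bits_per_weight: effective bits/weight of the quant (~4.5 for Q4, assumption)
    overhead: fudge factor for metadata / KV cache (~10%, assumption)
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9 * overhead

# A 32B model at ~4.5 bits/weight lands right around the 20GB mark.
print(round(quantized_size_gb(32, 4.5), 1))  # → 19.8
```

Higher-quality quants like Q4_K_M use more bits per weight on average, which is why they land a few GB above the plain Q4 estimate.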