anon373839 5 days ago

I would say 9 times out of 10, a larger model quantized to Q4 will give you better results than a model one size class smaller at Q8. But it's best not to go below Q4.
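As a rough sanity check on why that comparison is fair, here's the back-of-envelope memory math. This is a sketch using idealized bits-per-weight; real quant formats (e.g. GGUF k-quants) carry per-block scale overhead, so actual file sizes run somewhat higher, and the example model sizes are just illustrative:

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a model with the given
    parameter count (in billions) at the given bits per weight."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at Q4 fits in roughly the same memory budget as a
# 34B model at Q8 -- and the larger model usually wins on quality.
print(f"70B @ Q4: {weight_gb(70, 4.0):.1f} GB")  # 35.0 GB
print(f"34B @ Q8: {weight_gb(34, 8.0):.1f} GB")  # 34.0 GB
```

So for a fixed VRAM budget, the choice is often between those two configurations, not between Q4 and Q8 of the same model.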

nenaoki 5 days ago

My understanding is that current models are undertrained and not very "dense", so Q4 doesn't hurt much now, but it may hurt more in future, denser models.

anon373839 5 days ago

That may well be true. I know that earlier models like Llama 1 65B could tolerate more aggressive quantization, which supports that idea.