My understanding is that models are currently undertrained and not very "dense" (the weights aren't carrying as much information per parameter as they could), so Q4 doesn't hurt very much now, but it may hurt more in future, denser models.
That may well be true. I know that earlier models like Llama 1 65B could tolerate more aggressive quantization, which supports that idea.
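For anyone unsure what Q4 actually does to the weights, here's a rough sketch of round-to-nearest 4-bit quantization. This is a toy illustration only, not any particular library's format: it uses a single per-tensor scale for simplicity, whereas real Q4 schemes typically use per-block scales, and the error measurement is just MSE on random weights.

```python
# Minimal sketch of round-to-nearest 4-bit symmetric quantization (toy example,
# not a real Q4 format): quantize, dequantize, and look at the error introduced.
import numpy as np

def quantize_q4(weights: np.ndarray) -> np.ndarray:
    """Quantize to 4-bit signed ints (-8..7) with one per-tensor scale, then dequantize."""
    scale = np.abs(weights).max() / 7.0             # map the largest weight onto the int range
    q = np.clip(np.round(weights / scale), -8, 7)   # round-to-nearest, clamp to the 4-bit range
    return q * scale                                # dequantized approximation of the weights

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=4096).astype(np.float32)  # toy "layer" of weights
w_q4 = quantize_q4(w)
print("mean squared error:", np.mean((w - w_q4) ** 2))
```

The intuition in the comments above is that this rounding error is roughly fixed by the format, so the question is how much of the weights' useful information it destroys: if training hasn't packed much information into the low-order bits, losing them costs little.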