jauntywundrkind 6 days ago

Wish I knew better how to estimate what size video card one needs. The HuggingFace link says this is bfloat16, so at least 64GB?

I guess the -7B might run on my 16GB AMD card?
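
The back-of-the-envelope I've been going by (weights only, ignoring context/KV cache and runtime overhead, so treat it as a lower bound) — this is where the 64GB guess comes from:

    # weights-only VRAM estimate: params * bytes per weight
    def weights_gib(n_params_billion, bytes_per_param):
        return n_params_billion * 1e9 * bytes_per_param / 1024**3

    print(weights_gib(32, 2))    # 32B at bf16 (2 bytes) -> ~59.6 GiB
    print(weights_gib(7, 2))     # 7B at bf16            -> ~13.0 GiB
    print(weights_gib(7, 0.5))   # 7B at a 4-bit quant   -> ~3.3 GiB

So a bf16 7B only just squeezes into 16GB before any context; a 4-bit quant is comfortable.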

zamadatix 6 days ago

https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calcul...

That will help you quickly calculate the model's VRAM usage as well as the VRAM needed for the context length you want to use. You can put "Qwen/Qwen2.5-VL-32B-Instruct" in the "Model (unquantized)" field. Funnily enough, the calculator lacks an option to show the size without quantizing the model, presumably because nobody worried about VRAM bothers running >8-bit quants.
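
For the curious, a sketch of roughly what the calculator adds up: the weights plus the KV cache for your chosen context. The layer/head numbers below are illustrative, not the real Qwen2.5-VL-32B config — read those from the model's config.json:

    # KV cache estimate: 2 (K and V) * layers * kv heads * head dim * context * bytes
    def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
        return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 1024**3

    # illustrative config for a ~32B dense model with GQA
    print(kv_cache_gib(n_layers=64, n_kv_heads=8, head_dim=128, ctx_len=32768))  # -> 8.0 GiB

Add that on top of the (possibly quantized) weights and you're in the calculator's ballpark.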

azinman2 6 days ago

Except when it comes to deepseek

zamadatix 6 days ago

For others not as familiar, this is pointing out that DeepSeek-V3/DeepSeek-R1 are natively FP8, so selecting "Q8_0" aligns with not selecting quantization for those models (though you'll need ~1 TB of memory to use them unquantized at full context). Importantly, this does not apply to the "DeepSeek" distills of other models, which keep the native precision of the base model they were distilled from.

I expect more and more worthwhile models to natively have <16 bit weights as time goes on, but for the moment it's pretty much "8-bit DeepSeek and some research/testing models of various parameter widths".
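
Rough numbers behind the ~1 TB figure (param count from the DeepSeek-V3 model card, the rest is back-of-the-envelope):

    # DeepSeek-V3/R1: ~671B total params, FP8 = 1 byte per weight
    print(671e9 / 1024**3)   # -> ~625 GiB just for the weights

    # It's MoE (~37B params active per token), but every expert still has to
    # sit in memory, and the KV cache at the full 128K context plus runtime
    # overhead is what pushes the total toward ~1 TB.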

azinman2 5 days ago

I wish DeepSeek distills were somehow branded differently. The amount of confusion I've come across from otherwise technical folk, or the plain mislabeling ("I'm running R1 on my MacBook!"), is shocking. It's my new pet peeve.

xiphias2 6 days ago

I wish they would start publishing graphs with the quantized versions' performance as well. What matters is RAM/bandwidth vs. performance, not parameter count.

wgd 6 days ago

You can run a 4-bit quantized version at a small (though nonzero) cost to output quality, so you would only need 16GB for that.

Also, it's entirely possible to run a model that doesn't fit in available GPU memory by offloading part of it to system RAM; it will just be slower.
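
For example, with llama.cpp (via llama-cpp-python here) you just say how many layers to offload to the GPU and the rest stays in system RAM. A sketch — the GGUF filename and layer count are placeholders you'd tune for your card:

    from llama_cpp import Llama

    llm = Llama(
        model_path="some-32b-model-q4_k_m.gguf",  # placeholder path to a quantized GGUF
        n_gpu_layers=40,   # offload as many layers as fit in 16GB; -1 = all of them
        n_ctx=8192,
    )
    out = llm("Explain GPU offloading in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])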

clear_view 6 days ago

deepseek-r1:14b / mistral-small:24b / qwen2.5-coder:14b fit in 16GB VRAM with fast generation. The 32B versions bleed into system RAM and take a serious performance hit, but are still usable.
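
Those look like ollama tags, so for anyone wanting to reproduce: a minimal sketch with the ollama Python client, assuming the server is running locally and the model has already been pulled (e.g. via ollama pull qwen2.5-coder:14b):

    import ollama

    # send one chat turn to a locally served model and print the reply
    resp = ollama.chat(
        model="qwen2.5-coder:14b",
        messages=[{"role": "user", "content": "Write FizzBuzz in Python."}],
    )
    print(resp["message"]["content"])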