manmal 7 days ago

At FP16 you'd need 64GB just for the weights, and it'd be about 2x as slow as a Q8 version, likely with little quality gain. You'll also need room for the KV cache, context, etc., so 80-100GB (or even more) of VRAM would be better.
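
A rough back-of-the-envelope sketch of where those numbers come from (illustrative figures only - a ~32B-parameter dense transformer is assumed, and layer count, KV heads and context length vary by model):

    # Rough VRAM estimate for a ~32B-parameter dense transformer (illustrative numbers)
    params = 32e9                       # assumed parameter count
    weights_fp16_gb = params * 2 / 1e9  # 2 bytes/param at FP16 -> ~64 GB
    weights_q8_gb = params * 1 / 1e9    # ~1 byte/param at Q8   -> ~32 GB

    # KV cache per sequence: 2 (K and V) * layers * kv_heads * head_dim * context * bytes
    layers, kv_heads, head_dim, context, bytes_per = 64, 8, 128, 32_768, 2  # assumed config
    kv_cache_gb = 2 * layers * kv_heads * head_dim * context * bytes_per / 1e9

    print(f"FP16 weights ~{weights_fp16_gb:.0f} GB, Q8 weights ~{weights_q8_gb:.0f} GB")
    print(f"KV cache at {context} tokens ~{kv_cache_gb:.1f} GB, plus activations/overhead")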

Many people "just" use 4x consumer GPUs like the 3090 (24GB each), which scales well. They'd typically buy a mining rig frame, an EPYC CPU, a mainboard with enough PCIe lanes, PCIe risers, a 1600W PSU (you might need to power-limit the GPUs to 300W), and 128GB of RAM. Depending on what you pay for the GPUs, that'll be 3.5-4.5k.
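
A quick power-budget sketch for why the 300W limit matters on a 1600W PSU (the TDP and system-draw figures are rough assumptions, not measurements):

    # Why a 1600W PSU usually means power-limiting four 3090s (assumed figures)
    stock_gpu_w, limited_gpu_w, n_gpus = 350, 300, 4  # RTX 3090 stock TDP vs. power limit
    cpu_and_system_w = 300                            # EPYC CPU, board, RAM, fans (rough guess)

    print("stock:  ", n_gpus * stock_gpu_w + cpu_and_system_w, "W")    # ~1700 W, over budget
    print("limited:", n_gpus * limited_gpu_w + cpu_and_system_w, "W")  # ~1500 W, fits 1600 W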

postalrat 7 days ago

I haven't found a good case/risers/etc. that I really like. Most of the miner stuff wasn't made for PCIe x16.

manmal 6 days ago

Is that a problem? According to this, the GPUs don’t communicate that much once the weights are loaded: https://github.com/turboderp/exllama/discussions/16#discussi...

> So at FP16 precision that's a grand total of 16 kB you're transmitting over the PCIe bus, once per token. If you multiply by, say, 20 tokens per second, then you're still only using like 0.1% of your PCIe bandwidth.

Intra-GPU memory bandwidth is very important, but I've seen lots of people run on just x4 lanes and they didn't complain much.
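
To put numbers on that claim, a minimal sketch of the per-token traffic across a pipeline-parallel boundary (hidden size 8192 as in the 65B model the linked discussion is about; the PCIe figure is a nominal spec value):

    # Per-token inter-GPU traffic vs. PCIe x4 bandwidth (nominal figures)
    hidden_size, bytes_fp16 = 8192, 2
    per_token_kb = hidden_size * bytes_fp16 / 1e3  # ~16 kB handed to the next GPU per token
    tokens_per_s = 20
    traffic_mb_s = per_token_kb * tokens_per_s / 1e3

    pcie3_x4_gb_s = 3.9                            # nominal PCIe 3.0 x4 throughput
    print(f"{traffic_mb_s:.2f} MB/s over the riser, "
          f"~{traffic_mb_s / (pcie3_x4_gb_s * 1e3):.5%} of a PCIe 3.0 x4 link")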

abraxas 7 days ago

Would it be better for energy efficiency and overall performance to use workstation cards like the A5000 or A4000? Those can be found on eBay.

manmal 6 days ago

Looks like the A4000 has low memory bandwidth (50% of a 4090?), which is usually the limiting factor for inference. But they're efficient - if you can get them cheap, they're probably a good entry setup? If you like running models that need a lot of VRAM, you'll likely run out of PCIe slots before you're done upgrading.
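
A crude way to see why bandwidth dominates: for single-stream generation every weight gets read once per token, and with a pipeline-parallel split the cards take turns, so per-card bandwidth divided by model size gives a rough ceiling (spec-sheet bandwidths and a 32GB Q8 model are assumed; real throughput is lower):

    # Rough bandwidth-bound ceiling on single-stream tokens/s (assumed figures)
    model_gb = 32  # e.g. a ~32B model at Q8
    for card, bw_gb_s in {"RTX A4000": 448, "RTX 3090": 936, "RTX 4090": 1008}.items():
        print(f"{card}: ~{bw_gb_s / model_gb:.0f} tokens/s ceiling")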