Found out about air_llm, https://github.com/lyogavin/Anima/tree/main/air_llm, which loads one layer at a time, so each layer is about 1.6GB for a 70B with 80 layers. There's about 30MB for the KV cache, and I'm not sure where the rest goes.
It works with HF out of the box too, apparently. The weaknesses appear to be context length and speed, but anyway, anyone want to try Goliath 120B unquantized?
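For anyone curious what "one layer at a time" means in practice, here's a minimal sketch of the idea (not airllm's actual code): keep every transformer block's weights on disk, pull one block into VRAM, run the hidden states through it, free it, and move on. The file names and the use of nn.TransformerEncoderLayer as a stand-in block are assumptions for illustration only.

```python
# Sketch of layer-streaming inference: peak VRAM is roughly one transformer
# block plus activations / KV cache, instead of the whole model.
import torch
import torch.nn as nn

NUM_LAYERS = 80      # a 70B Llama-style model has 80 blocks
D_MODEL = 8192       # 70B hidden size
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def make_block():
    # stand-in for one decoder block; the real thing is a Llama block
    return nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=64, batch_first=True)

@torch.no_grad()
def forward_streaming(hidden_states):
    for i in range(NUM_LAYERS):
        block = make_block().to(DEVICE)
        # load only this layer's weights from disk (~1.6 GB each in fp16);
        # the per-layer checkpoint files here are hypothetical
        state = torch.load(f"layer_{i:02d}.pt", map_location=DEVICE)
        block.load_state_dict(state)

        hidden_states = block(hidden_states)

        # free this block before loading the next one
        del block, state
        torch.cuda.empty_cache()
    return hidden_states
```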
This seems like a brilliant and almost obvious idea; is there a reason this method wasn't a thing before, besides the PCIe bandwidth and storage speed requirements?
Because it wouldn't be any faster than doing CPU inference. Both CPUs and GPUs already spend most of their time waiting for data to arrive; it's that I/O that's the limiter, and this changes none of that.
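Back-of-the-envelope numbers make the point concrete: when every generated token has to stream all the weights past the compute, the token rate is capped by weight size divided by link bandwidth. The bandwidth figures below are assumed round numbers, not benchmarks.

```python
# Rough bandwidth-bound estimate: time per token >= bytes of weights / link bandwidth.
weights_gb = 70e9 * 2 / 1e9          # 70B params in fp16 ≈ 140 GB

links = {
    "PCIe 4.0 x16 (streaming layers to the GPU)": 32,   # GB/s, assumed
    "dual-channel DDR5 (plain CPU inference)": 80,       # GB/s, assumed
}

for name, gbps in links.items():
    sec_per_token = weights_gb / gbps
    print(f"{name}: ~{sec_per_token:.1f} s/token ({1 / sec_per_token:.2f} tok/s)")
```

Either way you end up at seconds per token, and streaming layers over PCIe comes out no better than just keeping the model in system RAM.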
Is there a better way to use models bigger than fit in RAM/VRAM? I'd want to try 70B or maybe even 120B, but I only have 32GB/8GB.
70B? Q4, llama.cpp, some layers on the GPU.
You might need to run Linux to get system RAM usage low enough.
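If you want a concrete starting point, something like this with llama-cpp-python does the partial offload; the model path and layer count are placeholders you'd tune to your 8GB of VRAM.

```python
# Q4-quantized 70B with partial GPU offload via llama-cpp-python
# (pip install llama-cpp-python, built with GPU support).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=15,   # offload as many of the 80 layers as fit in VRAM
    n_ctx=2048,        # keep context modest to limit KV-cache memory
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```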