TheHumanFixer@alien.top to LocalLLaMA · English · 1 year ago
Is there really no way to run 70B models without a very fast GPU or a lot of RAM?
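For context on why the question comes up, a rough memory estimate helps: a dense 70B-parameter model needs roughly (parameter count × bits per weight ÷ 8) bytes just for the weights, plus runtime overhead. A minimal sketch of that arithmetic, assuming a hypothetical 1.2× overhead factor for KV cache and buffers (not an exact figure for any particular runtime):

```python
# Back-of-the-envelope RAM/VRAM estimate for a dense transformer.
# The 1.2 overhead factor (KV cache, scratch buffers) is an assumption,
# not a measured figure for any specific inference engine.

def model_memory_gb(n_params_billion: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Approximate memory needed to hold the model, in GB."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for bits in (16, 8, 4):
    print(f"70B at {bits}-bit: ~{model_memory_gb(70, bits):.0f} GB")
```

Even at 4-bit quantization this lands around 40 GB, which is why a 70B model is out of reach for typical consumer RAM or VRAM without aggressive quantization or offloading.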
TheHumanFixer@alien.top to LocalLLaMA · English · 1 year ago
Will a local LLM simply fail to run on a computer with low RAM, or will it run but be incredibly slow?
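The usual answer is the second option: runtimes such as llama.cpp memory-map the weight file, so the model loads even when it exceeds RAM, but every generated token touches essentially all the weights, so the OS keeps paging them in from disk. A minimal sketch of the resulting lower bound on latency, assuming (hypothetically) the full weight file is re-read per token and a 2 GB/s NVMe SSD:

```python
# Rough lower bound on seconds per token when a dense model's weights
# don't fit in RAM and must stream from disk on every token.
# Worst-case assumption: each token reads the full weight file once;
# the 2 GB/s SSD throughput is an illustrative assumption.

def seconds_per_token(weights_gb: float, disk_gb_per_s: float) -> float:
    """Time to stream the whole weight file once, in seconds."""
    return weights_gb / disk_gb_per_s

# ~40 GB of 4-bit 70B weights over a 2 GB/s SSD:
print(f"{seconds_per_token(40, 2.0):.0f} s/token")  # => 20 s/token
```

So "incredibly slow" is accurate: tens of seconds per token instead of several tokens per second, which is why fitting the (quantized) model in RAM matters more than raw CPU speed.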
TheHumanFixer@alien.top to LocalLLaMA · English · 1 year ago
Is it possible to run Llama on 4 GB of RAM?
TheHumanFixer@alien.top (OP) · 1 year ago
Nope, regular RAM.