Void_0000@alien.top to LocalLLaMA • Why can't we just run local reinforcement learning?
1 year ago
How hard can it be?
Seriously though, what makes it require more VRAM than regular inference? You’re still loading the same model, aren’t you?
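(For anyone wondering: the weights are the same, but training keeps far more state in VRAM than inference does. Backprop stores a gradient for every parameter plus the optimizer's running statistics, activations from the forward pass have to be held for the backward pass, and PPO-style RLHF setups usually also keep a frozen reference model loaded for the KL penalty. Here's a minimal back-of-the-envelope sketch, assuming fp16 weights with a standard fp32 Adam mixed-precision setup; the 7B parameter count and the PPO reference-model line are illustrative assumptions, not anything from this thread.)

```python
# Rough per-parameter VRAM budget: why RL fine-tuning needs far more
# memory than inference, even though the model weights are identical.
# Byte counts follow the commonly cited mixed-precision training layout
# (fp16 weights/grads + fp32 Adam moments + fp32 master weights).

PARAMS = 7e9  # hypothetical 7B-parameter model

def gib(n_bytes: float) -> float:
    """Convert bytes to GiB."""
    return n_bytes / 2**30

# Inference: just the fp16 weights (2 bytes/param); KV cache ignored here.
inference = PARAMS * 2

# Training: fp16 weights (2) + fp16 gradients (2) + Adam fp32 moments (4 + 4)
# + fp32 master weights (4) = 16 bytes/param, before activation memory.
training = PARAMS * 16

# PPO-style RL additionally keeps a frozen fp16 reference model loaded.
ppo = training + PARAMS * 2

print(f"inference weights:   {gib(inference):6.1f} GiB")
print(f"training state:      {gib(training):6.1f} GiB")
print(f"+ frozen ref model:  {gib(ppo):6.1f} GiB")
```

Running that prints roughly 13 GiB for inference weights versus ~104 GiB of training state (before activations), which is why full RL fine-tuning of even a 7B model won't fit on a single consumer GPU.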
So, how long until they sell out to Microsoft? ^(/s)