gpt872323@alien.top to LocalLLaMA · English · 1 year ago
What is the major difference between the various frameworks with regard to performance, hardware requirements, and model support? Llama.cpp vs koboldcpp vs LocalAI vs GPT4All vs Oobabooga
gpt872323@alien.top (OP) to LocalLLaMA · English · 1 year ago, replying in "3060 Performance with 13b Model":
thanks
gpt872323@alien.top to LocalLLaMA · English · 1 year ago
3060 Performance with 13b Model
gpt872323@alien.top to LocalLLaMA · English · 1 year ago, replying in "Cheapest way to run local LLMs?":
I have this same question. I am thinking of a mini PC that is more powerful than both options, at a relatively reasonable price. Not a NUC, but rather a mini PC with an AMD or Intel mobile-series chip.