Ok_Relationship_9879@alien.top · LocalLLaMA · 1 year ago · re: 🐺🐦⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4
That's pretty amazing. Thanks for all your hard work!
Does anyone know if the Nous Capybara 34B is uncensored?
Ok_Relationship_9879@alien.top · LocalLLaMA · 1 year ago · post: GPT-4's 128K context window tested
Ok_Relationship_9879@alien.top · LocalLLaMA · 1 year ago · re: For roleplay purposes, Goliath-120b is absolutely thrilling me
Which models do you find to be good at 16k context for story writing?
Ok_Relationship_9879@alien.top · LocalLLaMA · 1 year ago · re: RedPajama-Data-v2: an Open Dataset with 30 Trillion Tokens for Training Large Language Models
If Chinchilla is right, this dataset could be huge for small models. https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications
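To illustrate the point about small models, here is a minimal back-of-the-envelope sketch (my own illustration, not from the comment), assuming the commonly cited "~20 training tokens per parameter" summary of the Chinchilla result (Hoffmann et al., 2022). It compares the Chinchilla-optimal token budget for a few small model sizes against RedPajama-Data-v2's ~30 trillion tokens:

```python
# Assumption: the widely quoted ~20 tokens/parameter Chinchilla heuristic.
TOKENS_PER_PARAM = 20
DATASET_TOKENS = 30e12  # RedPajama-Data-v2: ~30 trillion tokens

for params in (1e9, 3e9, 7e9, 13e9):
    optimal_tokens = params * TOKENS_PER_PARAM
    coverage = DATASET_TOKENS / optimal_tokens
    print(f"{params / 1e9:>4.0f}B params: Chinchilla-optimal ≈ "
          f"{optimal_tokens / 1e12:.2f}T tokens, dataset is ~{coverage:.0f}x that")
```

Even a 13B model would only "need" about 0.26T tokens by that heuristic, so a 30T-token corpus leaves enormous headroom for training small models far past the compute-optimal point, which is the "wild implication" the linked post discusses.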