Build a Fully Local RAG App With PostgreSQL, Mistral, and Ollama (www.timescale.com) · rglullis@communick.news · English · 2 months ago · 0 comments
Jailbreak prompts for Llama? · zerokerim@alien.top · English · 1 year ago · 2 comments
Best open/commercial model that is tuned on ChatGPT4? · learning_hedonism@alien.top · English · 1 year ago · 1 comment
Just curious, are there any GUIs for creating a LLaMA2 architecture similar to how OpenAI does "custom GPTs"? · LivingDracula@alien.top · English · 1 year ago · 2 comments
Which is the best model (finetuned or base) to extract structured data from a bunch of text? · sandys1@alien.top · English · 1 year ago · 6 comments
Is RAG better with fine-tuning on the same data, or pure RAG FTW? · Shoddy_Vegetable_115@alien.top · English · 1 year ago · 1 comment
How to start red teaming on LLMs? · kadhi_chawal2@alien.top · English · 1 year ago · 1 comment
Cheapest GPU/way to run 30B or 34B "Code" models with GPT4All? · ForsookComparison@alien.top · English · 1 year ago · 1 comment
A100 inference is much slower than expected with small batch size · currytrash97@alien.top · English · 1 year ago · 2 comments
A new dataset for LLM training has been released! · Grouchy-Mail-2091@alien.top · English · 1 year ago · 2 comments
How to install the llama.cpp version for Qwen72B? · Secret_Joke_2262@alien.top · English · 1 year ago · 1 comment
Nous-Hermes-2-Vision · Nix_The_Furry@alien.top · English · 1 year ago · 1 comment
QuIP#: SOTA 2-bit quantization method, now implemented in text-generation-webui (experimental) (github.com) · oobabooga4@alien.top · English · 1 year ago · 6 comments
Is an M1 Max MacBook Pro worth it? · PuzzledWhereas991@alien.top · English · 1 year ago · 3 comments
Anyone running 3 GPUs? Looking for advice on the best X670 board that might be able to fit a third card. · fluffywuffie90210@alien.top · English · 1 year ago · 3 comments
This model is extremely good · noobgolang@alien.top · English · 1 year ago · 15 comments
Politically balanced chat model? · Clark9292@alien.top · English · 1 year ago · 7 comments
Optimum Intel OpenVINO Performance · fakezeta@alien.top · English · 1 year ago · 4 comments
I refuse to believe my MacBook M1 Pro is faster than my 2070 Super 8GB + 8th-gen i7 (both have 16GB RAM) · roll_left_420@alien.top · English · 1 year ago · 2 comments
13B models chart · qualaric@alien.top · English · 1 year ago · 1 comment