I have tried to set up 3 different versions of it: TheBloke's GPTQ and AWQ versions, and the original deepseek-coder-6.7b-instruct.
I have tried the 33B as well.
My specs are 64GB RAM, a 3090 Ti, and an i7-12700K.
With AWQ I just get a bugged response (an endless run of `"` characters) until max tokens.
GPTQ works much better, but all versions seem to add an unnecessary * at the end of some lines,
and the results are worse than on the website (deepseek.com). Say I ask for a snake game in pygame: it usually gives an unusable version, and after 5-6 tries I'll get a somewhat working version, but I'll still need to ask for a lot of changes.
On the official website, I get working code on the first try, without any problems.
I am using the Alpaca template, adjusted to match the deepseek format (oobabooga webui).
What can cause this? Is the website version different from the Hugging Face model?
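For reference, this is the instruction format I'm assuming the adjusted template should produce, based on the deepseek-coder-instruct model card (the system line is shortened here; the real one is longer, so double-check against the model card):

```python
# Sketch of the prompt format deepseek-coder-instruct is documented to expect.
# The system line below is a shortened placeholder, not the exact official text.
def build_prompt(instruction: str) -> str:
    system = "You are an AI programming assistant."  # placeholder system line
    return f"{system}\n### Instruction:\n{instruction}\n### Response:\n"

print(build_prompt("Write a snake game in pygame."))
```

If the template in ooba deviates from this (e.g. plain Alpaca's "Below is an instruction..." preamble), output quality can degrade noticeably, which might explain part of the gap versus the website.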
Thank you for the response!
I'll try to adjust the temp too. How can I disable the samplers in oobabooga? What are the settings?
Is there a way to set the rep penalty lower than 1?
Unfortunately I haven't used ooba in a few months so I can't tell you, but in koboldcpp it just tells you which values disable the samplers.
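For what it's worth, these are the conventionally "neutral" values that effectively disable each sampler in most llama.cpp-style backends; I'd still double-check them against ooba's own parameter docs:

```python
# Conventionally neutral sampler values -- each one effectively disables
# that sampler in most backends (a convention, not an ooba-specific guarantee):
neutral_samplers = {
    "temperature": 1.0,        # 1.0 = logits used as-is, no rescaling
    "top_p": 1.0,              # 1.0 = keep the full distribution
    "top_k": 0,                # 0 = no top-k cutoff
    "typical_p": 1.0,          # 1.0 = typical sampling off
    "min_p": 0.0,              # 0.0 = no minimum-probability filter
    "repetition_penalty": 1.0, # 1.0 = no penalty; values < 1 would *encourage* repeats
}
```

That last comment also answers the rep-penalty question: 1.0 is the no-op value, and going below 1 pushes the model toward repetition rather than away from it, so there's rarely a reason to.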