I’m struggling to get 7B models to do anything useful. I’m obviously doing something wrong, as it appears many people get good results with 7B models.
But I can’t get them to follow instructions; they keep repeating themselves, and occasionally they start to converse with themselves.
Does anyone have any pointers on what I’m doing wrong?
Anecdotally, I keep going back to a 13B one…
Falcon-7B fine-tuned is pretty powerful. Within its domain, and in a RAG stack, it outperforms GPT-3.5.
Thanks
They’re generally good for single-shot or few-shot tasks, e.g. getting cliff notes or creating templates. You can use a vector DB for informational accuracy. They struggle to keep character and context, I’ve noticed.
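To make the vector-DB point concrete, here is a toy sketch of retrieval-augmented lookup in pure Python. The word-overlap "embedding" and the sample documents are made up for illustration; a real stack would use a proper embedding model and a vector store such as FAISS or Chroma:

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; a real RAG stack would use a
    # sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Return the k documents most similar to the query; these would be
    # pasted into the model's prompt for informational accuracy.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Falcon 7B was trained on the RefinedWeb dataset.",
    "Llama 2 comes in 7B, 13B and 70B sizes.",
]
print(retrieve("what dataset was Falcon trained on?", docs))
```

The idea is just that the model answers from retrieved text rather than from its weights, which helps a lot with factual accuracy on small models.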
Okay thanks
OpenHermes 2.5 is amazing from what I’ve seen. It can call functions, summarize text, is extremely competitive, the works.
How does it do function calling? Some internal API?
it outputs the call
It returns JSON with the function name and its arguments, which you can parse later in the program to call the function with the arguments the model provided.
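A minimal sketch of the parsing side. The JSON shape, the `get_weather` function, and the dispatch table here are all assumptions for illustration; the exact format depends on the prompt you use:

```python
import json

# Hypothetical function the model is allowed to call.
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output):
    # Parse the model's JSON tool call and invoke the matching function
    # with the arguments the model provided.
    call = json.loads(model_output)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Example: suppose the model emitted this string as its reply.
reply = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'
print(dispatch(reply))  # Sunny in Oslo
```

In practice you would also validate the name against your allowed tools and handle malformed JSON, since the model can and will occasionally produce broken output.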
I’m seconding that. I’m actually amazed by how it performs, frequently getting answers similar to or better than bigger models. I’m starting to think that we lose a lot with quantization of the bigger models…
Haven’t you noticed slower inference from OpenHermes 2.5 compared to other 7B models?
Can you provide the prompt for function calling?
Mistral 7B instruct can get you pretty far. Even the quantized model has been pretty useful for me.
Thanks
I use airoboros 7b to give me plots and write the beginnings of stories. It’s very useful for that.
Try to use the instruct models like Mistral. Ensure your template is the correct one as well.
How do you find the right template?
It should be on the model page on Hugging Face; they also have an explicit template module which you can load automatically when interacting using the model ID.
The Llama ones are forgiving if you don’t use the structure, but Mistral-Instruct is very bad if the structure is not maintained.
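For reference, the Mistral-Instruct format wraps each user turn in `[INST]` tags. A rough pure-Python formatter, simplified from what the tokenizer's chat template would produce (the real template lives in the model's tokenizer config, so treat this as a sketch):

```python
def mistral_prompt(turns):
    # Format alternating (user, assistant) turns into the
    # Mistral-Instruct layout. Simplified: the authoritative template
    # is the one shipped with the model's tokenizer.
    out = "<s>"
    for user, assistant in turns:
        out += f"[INST] {user} [/INST]"
        if assistant is not None:
            out += f" {assistant}</s>"
    return out

print(mistral_prompt([("Summarize this article.", None)]))
# <s>[INST] Summarize this article. [/INST]
```

If you feed a Mistral-Instruct model raw text without this structure, you tend to get exactly the rambling, self-conversing behaviour described earlier in the thread.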
Llama-2 chat, Mistral, Zephyr, and OpenHermes 2.5 are great 7B models for fine-tuning. I have experimented with these and was able to get great results for summarization and RAG.
Tried most of them; absolutely useless.
I’m still evaluating, though, and hopefully, thanks to all the tips and suggestions here, my opinion may change.
Great for rubber-ducking if you’re writing a story.
For instruct specifically, certain models do better with certain things. OpenChat, OpenHermes and Capybara seem to be the best, but they will all underperform next to a good merge/finetune of a 13B model. Depending on the type of instruction, one of those will be better than the others.
As for repetition, this seems to fall away somewhat at very long context sizes. Because of the sliding window, these models can handle such context sizes, and if you use something like llama.cpp, the context can be reused so that you won’t have to process the whole prompt each time.
7B models are generally better for creative writing; however, as I said, there are specific types of instructions they will handle well.
Update on this topic…
I realised I’ve made some mistakes. The reason I asked about 7B models to start with is that the computer I’m using is resource-constrained (and normally I use a frontend for the actual interaction).
But because I only have 8 GB of RAM in the computer, I decided to go with llama.cpp directly, and this is obviously where things went wrong.
First of all, I obviously messed up the prompt. Not that I notice any significant difference now that I’ve realised it, but it did not follow the expected format for the model I was using.
But the key thing appeared to be that I had been using the -i (interactive) argument, thinking it would work like a chat session. It does appear to for a few queries, but, as stated in the original post, all of a sudden the model starts to converse with itself (filling in my queries, etc.).
It turns out I should have used --instruct all along, and once I realised that, things started to work a lot better (although not perfectly). Finally, I decided to give neural-chat a try, and dang, it appears to do most things I ask it to with great success.
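For anyone hitting the same wall, the difference was roughly the following. These flags are from an older llama.cpp `main` binary, the model filename is a placeholder, and newer builds have renamed or replaced some options, so check `--help` on your version:

```shell
# What I was doing: plain interactive mode, where the model may keep
# generating and start filling in "my" side of the conversation.
./main -m ./models/neural-chat-7b.Q4_K_M.gguf -i

# What worked much better: instruct mode, which wraps each input in the
# instruction template before generation.
./main -m ./models/neural-chat-7b.Q4_K_M.gguf --instruct
```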
Thanks all for your feedback and comments.