Hello LocalLLama.

Do you have tips on how to make the best use of models that have not been fine-tuned for chat or instruct?

Here’s my issue: I use LLMs for storywriting and for making character profiles (I’ve been doing that a lot for D&D character sheets, for example).

I feel that most models have a strong bias toward positive stories, happy endings, and really clichéd phrases. The stories have perfect grammar, but they are boring and clichéd as heck. Using instructions to tell the model not to do that doesn’t work that well. I checked out r/chatgpt for tips on getting good stories out of ChatGPT, and it seems there are no great solutions there either. Maybe this leaks into local models because a bunch of them use GPT-4-derived training data, so now local models want overly positive outputs as well.

So I thought “Alright. I’ll try using a base model. Instead of giving it instructions, I’ll make it think it’s completing a book or something”.
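To make that concrete, here’s roughly the kind of completion-style framing I mean. Everything in it (the title, the excerpt) is made up for illustration; the point is just to hand the base model a document to continue rather than an instruction:

```python
# Sketch of a completion-style prompt for a base model: no instructions,
# just the opening of a fictional book for the model to continue.
# Title and text here are invented for illustration only.
prompt = (
    "The Hollow Crown of Ashvale\n"
    "Chapter 1\n\n"
    "The rain had not let up for three days when Maera finally reached "
    "the gates of Ashvale, and the guards there did not look pleased "
    "to see her."
)
# The base model is then given `prompt` as plain text to continue.
```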

But that also doesn’t work that well. Llama-2-70B, for example, easily falls into repetitive patterns, and I feel it’s even worse than using a positivity-biased chat- or instruct-tuned model.

I’m looking for answers or insights on the following questions:

  1. Are there any base models worth using? I’ve tried Yi base models, for example; they seem about the same as Llama-2-70B base (just faster). I’m more than willing to spend time prompt engineering in exchange for more interesting outputs.

  2. Do you know of resources/tricks/tips/insights on how to make the best use of base models? Resources on how to prompt them? Sampler settings?

  3. Why do base models seem to do so badly, even when I prompt them as pure text completion and assume they have no concept of following instructions? Mostly I see them fall into repeating the same sentence or structure over and over again. Fine-tuned models don’t do this, even when I otherwise don’t like their outputs.

  4. Out of curiosity, are you aware of any fine-tuned models that are not tuned for chat or instruct? Kinda wondering if anyone has found any interesting use cases.

  • Inevitable-Highway85@alien.topB · 1 year ago

    Temperature, top_p, and repetition penalty. Take notes as you tune them. Take a raw model, get the dataset structure, and fine-tune it. I have a 7B model that can communicate almost exactly like me using the recipe above.
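For anyone wondering what those three knobs actually do, here is a minimal, self-contained sketch of temperature + top-p + repetition-penalty sampling over raw logits. This is illustrative pseudocode in plain Python, not any particular library’s API; the function name and numbers are made up, but the penalty convention (divide positive logits, multiply negative ones) mirrors the one commonly used in open-source samplers:

```python
import math
import random

def sample(logits, temperature=0.8, top_p=0.9,
           repetition_penalty=1.1, recent_ids=()):
    """Pick one token id from `logits` using common sampler settings."""
    logits = list(logits)
    # Repetition penalty: push down tokens that already appeared, which
    # discourages the looping that base models tend to fall into.
    for t in set(recent_ids):
        if logits[t] > 0:
            logits[t] /= repetition_penalty
        else:
            logits[t] *= repetition_penalty
    # Temperature: < 1 sharpens the distribution, > 1 flattens it.
    scaled = [l / temperature for l in logits]
    # Softmax (shifted by the max for numerical stability).
    m = max(scaled)
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    # Top-p (nucleus): keep the smallest set of most-likely tokens whose
    # cumulative probability reaches top_p, drop everything else.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Sample from the renormalized kept set.
    r = random.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a very small top_p it degenerates to greedy decoding (only the single most likely token survives the nucleus cut), and a large repetition penalty on a recently used token visibly shifts the pick elsewhere, which is exactly the behavior you’re tuning against repetitive base-model output.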