Can you make any suggestions for a model that is good for general chat, and is not hyper-woke?

I’ve just had one of the base Llama-2 models tell me it’s offensive to use the word “boys” because it reinforces gender stereotypes. The conversation at the time didn’t even have anything to do with gender or related topics. Any attempt to get it to explain why it thought this resulted in the exact same screen full of boilerplate about how all of society is specifically designed to oppress women and girls. This is one of the more extreme examples, but I’ve had similar responses from a few other models. It’s as if they tried to force their views on gender and related matters into conversations, no matter what they were about. I find it difficult to believe this would be so common if the training had been on a very broad range of texts, and so I suspect a deliberate decision was made to imbue the models with these sorts of ideas.

I’m looking for something that isn’t politically or socially extreme in any direction, and is willing to converse with someone taking a variety of views on such topics.

  • fediverser@alien.top
    1 year ago

    This post is an automated archive from a submission made on /r/LocalLLaMA, powered by Fediverser software running on alien.top. Responses to this submission will not be seen by the original author until they claim ownership of their alien.top account. Please consider reaching out to them to let them know about this post and help them migrate to Lemmy.

    Lemmy users: you are still very much encouraged to participate in the discussion. There are still many other subscribers on !localllama@poweruser.forum that can benefit from your contribution and join in the conversation.

    Reddit users: you can also join the fediverse right away by visiting https://portal.alien.top. If you are looking for a Reddit alternative made for and by an independent community, check out Fediverser.

    • VertexMachine@alien.topB
      1 year ago

      Don’t know OP, and the below is not aimed at him. But most people call something ‘unbiased’ if it’s aligned with their own biases. “Outsmarting” your own brain and maintaining self-awareness at the meta level is really hard.

  • metaprotium@alien.topB
    1 year ago

    Hard to say. You’d probably be better off trying a model that’s been fine-tuned for use as an assistant. It also helps to add a system prompt to guide the model, assuming you pick an instruction fine-tuned one. I’d be surprised if that failed, but try not to judge the models too harshly if their views align with an average of the training data. In my (admittedly limited) experience, none of the models are ‘woke’ as you say. They’re very average, which makes sense given what they were trained on. Perhaps you will find that human bias is a user error, and not a model error.
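    As a concrete sketch of the system-prompt suggestion above: with Llama-2 chat models, the system message is wrapped in `<<SYS>>` markers inside the first `[INST]` block. The helper name and the example system-prompt wording below are illustrative, not from any library; other model families use different chat templates, so check your model's card before reusing this.

```python
# Minimal sketch: rendering a system + user message pair in the
# Llama-2 chat template. The wording of the system prompt is only
# an example of steering tone; adjust it to taste.

def build_llama2_prompt(system: str, user: str) -> str:
    """Format one system message and one user turn for a Llama-2 chat model."""
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = build_llama2_prompt(
    system=(
        "You are a politically neutral assistant. Present multiple "
        "viewpoints on contested topics without moralizing."
    ),
    user="Summarize the main arguments on both sides of this debate.",
)
print(prompt)
```

    The rendered string is what you would feed to the model (or let a library such as llama.cpp apply for you via its chat-template support); instruction-tuned Llama-2 variants were trained to treat the `<<SYS>>` block as standing guidance for the whole conversation.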

  • slingbagwarrior@alien.topB
    1 year ago

    Just a thought: is it possible to prompt your model with something like “You are a politically balanced AI model”? Or, if you are looking for more nuance in the model’s viewpoints, try prompting it to give both liberal and conservative opinions when you are discussing controversial issues with it.

    Otherwise, I believe any alignment-free / uncensored model should work.

  • a_beautiful_rhind@alien.topB
    1 year ago

    Good luck. Centrism is not allowed; you would have to skip the last decade of internet data. Social engineering works much the same on both people and language models.