1. for coding
  2. for generating stories, writing emails, poems, etc.
  3. good overall
  4. etc.
        • Illustrious-Lake2603@alien.topB · 1 year ago

          I, for one, just don’t trust these Chinese models at all. Not saying there’s anything wrong with this one, but it’s clearly aligned with the Chinese agenda when I try to ask it anything about Taiwan. For coding, though, it works well, and you can run it offline.

        • Sufficient-Math3178@alien.topB · 1 year ago

          AFAIK models used to be just plain pickled code: when you load one, for example, it would do so by calling a method pickled inside the model file. The uploader could set up this method to do practically anything they want, and it doesn’t need to be obviously malicious, since the code runs just like a normal Python script. For example, it could simply load/render a WebP image designed to exploit the recent libwebp vulnerability.

          They changed this a while back, so now you need to pass an argument when loading the model to allow this behavior, and this model requires it.
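          The pickle risk described above can be sketched in a few lines: `__reduce__` lets an object tell the unpickler to call an arbitrary function at load time. The class name `Payload` and the command are made up for illustration; this is a minimal sketch, not any particular model's exploit.

```python
import os
import pickle

class Payload:
    # __reduce__ tells the unpickler how to "reconstruct" this object;
    # here it instructs it to call os.system with an attacker-chosen command.
    def __reduce__(self):
        return (os.system, ("echo pwned",))

# Serializing the object embeds the call to os.system in the byte stream.
data = pickle.dumps(Payload())
# pickle.loads(data) would execute the shell command during deserialization,
# which is why loading untrusted pickled model files is dangerous.
```

          For reference, the opt-in argument mentioned above is, in Hugging Face transformers, `trust_remote_code=True` on `from_pretrained`; PyTorch likewise added a `weights_only=True` option to `torch.load` that refuses to unpickle arbitrary objects.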

        • Dry-Vermicelli-682@alien.topB · 1 year ago

          What hardware are you running it on? CPU/GPU, RAM, etc.? Trying to figure out what I need. My old gen 1 16-core Threadripper with 64GB RAM doesn’t seem to work very well: multiple minutes for a simple hello response. No GPU though, but I’m looking to put in a 6700XT GPU… not sure if that GPU will help a lot or not.

      • danigoncalves@alien.topB · 1 year ago

        I was actually comparing both today (codellama 7B), and man, codellama just gave crap; deepseek was very accurate.

    • Mescallan@alien.topB · 1 year ago

      Seconding this. I haven’t tested it for code yet, but it’s very enjoyable to converse with. I find it does summaries quite well; I’ve asked it about a wide range of topics and it has been ~90% correct on the first response. It can kind of fall apart after going back and forth a few times, but it’s only 7B.

    • smile_e_face@alien.topB · 1 year ago

      Could you share your temperature and sampler settings for OpenHermes? I see it recommended all over the place, but I get only mediocre results with it in SillyTavern.

    • shivam2979@alien.topB · 1 year ago

      Loved the responses from OpenHermes 2.5, but found the inference on the slower side, especially when comparing it to other 7B models like Zephyr 7B or Vicuna 1.5 7B.

    • JohnExile@alien.topB · 1 year ago

      Neural Chat 7B works fine with normal instructions for assistant use, but after trying to give it custom instructions for things like summarization, code blocks, or formatting, it completely broke, even though the same instructions worked fine with other models I use. YMMV.

    • Tupletcat@alien.topB · 1 year ago

      Which settings do you use for it? Context, prompt, etc.? People swear by Toppy, but I’m not really seeing it, and I wonder if it’s my configuration.