The title, pretty much.

I’m wondering whether a 70B model quantized to 4-bit would perform better than a 7B/13B/34B model at fp16. Would be great to get some insights from the community.
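
For a rough sense of the memory budgets involved, here is a back-of-envelope sketch in Python (not from the thread; the bits-per-weight figures are approximations, since real GGUF quants such as Q4_K_M carry extra metadata, and KV cache and runtime overhead are ignored):

```python
# Back-of-envelope weight memory for dense transformer LLMs.
# Bits-per-weight values are approximate; actual GGUF quants add some overhead.

def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB: params (billions) * bits / 8 bytes."""
    return params_billion * bits_per_weight / 8

for name, params, bpw in [
    ("7B fp16",    7, 16.0),
    ("13B fp16",  13, 16.0),
    ("34B fp16",  34, 16.0),
    ("34B 8-bit", 34,  8.5),
    ("70B 4-bit", 70,  4.5),
    ("70B 3-bit", 70,  3.5),
]:
    print(f"{name:>9}: ~{weight_gb(params, bpw):.0f} GB")
```

By this estimate a 4-bit 70B (~39 GB of weights) is actually smaller than a 34B at fp16 (~68 GB), which is why the two end up being compared at all.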

  • Sea_Particular_4014@alien.topB · 1 year ago

    Adding to Automata’s theoretical info, I can say that anecdotally I find 4-bit 70B substantially better than 8-bit 34B or below, but it’ll depend on your task.

    It seems like right now the 70B models are really good for storywriting, RP, logic, and so on, while if you’re doing programming, data classification, or similar, you might be better off with a higher-precision smaller model that’s been fine-tuned for the task at hand.

    I noticed in my 70B rant thread from a couple of days ago that most of the people saying they didn’t find the 70B much better (or better at all) were doing programming or data classification type work.

      • Dusty_da_Cat@alien.topB · 1 year ago

        The gold standard is 2 x 3090/4090 cards, which is 48 GB of VRAM total. You can get by with two P40s (they need a cooling solution) and run your display off onboard video if you want to save some money. The speeds will be slower, but still better than running from system RAM on typical setups.

        • Dry-Vermicelli-682@alien.topB · 1 year ago

          44 GB of GPU VRAM? What GPU has 44 GB other than stupidly expensive ones? Are average folks running $25K GPUs at home? Or are the people running these working for companies with lots of money, building small GPU servers to run them?

        • harrro@alien.topB · 1 year ago

          Using Q3, you can fit it in 36 GB (I have a weird combo of an RTX 3060 with 12 GB and a P40 with 24 GB, and I can run a 70B at 3-bit fully on GPU).

            • harrro@alien.topB · 1 year ago

              Yes, llama.cpp will automatically split the model across GPUs. You can also specify how much of the model should go on each GPU.

              Not sure about AMD support, but for NVIDIA it’s pretty easy to do.
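
              As a reference, here is a minimal llama-cpp-python sketch of that kind of split; the filename is a placeholder and the 12:24 ratio is just an assumption matching the 12 GB + 24 GB combo mentioned above:

              ```python
              # Sketch: offload all layers and split them across two GPUs.
              # Model path and split ratio are placeholders, not exact settings.
              from llama_cpp import Llama

              llm = Llama(
                  model_path="llama-2-70b.Q3_K_M.gguf",  # placeholder filename
                  n_gpu_layers=-1,        # -1 = offload every layer to GPU
                  tensor_split=[12, 24],  # rough per-GPU proportion (3060 : P40)
              )

              out = llm("Explain quantization in one sentence.", max_tokens=32)
              print(out["choices"][0]["text"])
              ```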

      • Sea_Particular_4014@alien.topB · 1 year ago

        Well… none at all if you’re happy with ~1 token per second or less using GGUF CPU inference.

        I have a single 3090 (24 GB) and get about 2 tokens per second with partial offload. I find that usable for most stuff, but many people find it too slow.

        You’d need 2 x 3090 or an A6000 or something to do it quickly.
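
        For anyone curious, partial offload in llama-cpp-python is just a finite layer count instead of -1; a sketch (the layer count and filename here are illustrative assumptions):

        ```python
        # Partial offload sketch: put only some layers on the GPU, keep the rest
        # in system RAM. 40 is an illustrative number; tune it so the offloaded
        # weights plus context fit on a single 24 GB card.
        from llama_cpp import Llama

        llm = Llama(
            model_path="llama-2-70b.Q4_K_M.gguf",  # placeholder filename
            n_gpu_layers=40,  # offload roughly half of a 70B's layers
            n_ctx=4096,
        )
        ```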