Yes. This has to be the worst RAM you guys have ever seen, but hear me out. Is it possible? I want to run the full 70B model, but that's far out of the question and I'm not even going to bother. Can I at least run the 13B, or at least the 7B?

  • m18coppola@alien.top · 1 year ago

    I have run 7B models with Q2_K on my Raspberry Pi with 4 GB, lol. It's kinda slow (still faster than I bargained for), but Q2_K models tend to be pretty stupid at the 7B size, no matter the speed. You can theoretically run a bigger model using swap space (kind of like using your storage drive as RAM), but then the token generation speeds come crawling to a halt.
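    Roughly, the "small quantized model on a tiny box" setup looks like this with the llama-cpp-python bindings (just a sketch; the GGUF filename and settings are placeholders, not anything specific from this thread):

    ```python
    # pip install llama-cpp-python
    from llama_cpp import Llama

    # A 7B model at Q2_K is only ~2.7-3 GB on disk, which is why it can
    # squeeze into 4 GB of RAM if the OS and the context are kept small.
    llm = Llama(
        model_path="./llama-2-7b.Q2_K.gguf",  # placeholder path
        n_ctx=512,      # small context window to keep memory down
        n_threads=4,    # match your CPU core count
    )

    out = llm("Q: Name one planet in the solar system. A:", max_tokens=32)
    print(out["choices"][0]["text"])
    ```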

  • Delicious-View-8688@alien.top · 1 year ago

    Yes. There is an implementation that loads each layer as required, thereby reducing the VRAM requirements. Just google it: "LLaMA 70B with 4GB".
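    For anyone curious what "loads each layer as required" means in practice, here's a toy sketch of the idea in PyTorch (plain Linear layers standing in for transformer blocks, sizes made up), where only one layer's weights are resident at a time:

    ```python
    # Toy layer-by-layer loading: keep only one layer's weights in memory
    # at a time, trading a disk read per layer for a tiny memory footprint.
    import torch
    import torch.nn as nn

    HIDDEN = 1024     # made-up hidden size
    N_LAYERS = 8      # made-up layer count

    # One-time setup: write each layer's weights to its own file.
    for i in range(N_LAYERS):
        torch.save(nn.Linear(HIDDEN, HIDDEN).state_dict(), f"layer_{i}.pt")

    # "Inference": stream the layers in from disk one at a time.
    x = torch.randn(1, HIDDEN)
    for i in range(N_LAYERS):
        layer = nn.Linear(HIDDEN, HIDDEN)
        layer.load_state_dict(torch.load(f"layer_{i}.pt"))
        with torch.no_grad():
            x = layer(x)
        del layer     # drop this layer before loading the next one
    print(x.shape)
    ```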

  • DarthInfinix@alien.top · 1 year ago

    Hmm, theoretically, if you switch to a super light Linux distro and grab the Q2-quantized 7B, you should be able to run a 7B model with llama.cpp, where mmap is on by default, seeing as I can run a 7B on a shitty $150 Android with like 3 GB of RAM free using llama.cpp.
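    The mmap part is doing a lot of the work there: mapping the model file doesn't copy it into RAM, the OS only pages in the pieces that actually get touched. A quick way to see the idea in Python (filename is a placeholder):

    ```python
    # Sketch of why mmap keeps memory use low: the file is mapped, not read,
    # so only the pages you actually touch get pulled into RAM by the OS.
    import mmap

    with open("llama-2-7b.Q2_K.gguf", "rb") as f:  # placeholder filename
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        print(f"mapped {mm.size() / 1e9:.1f} GB without reading it all")
        print("first bytes:", mm[:4])  # only this page is actually fetched
        mm.close()
    ```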