I’m using an A100 PCIe 80GB with the CUDA 11.8 toolkit and driver 525.x.

But when I run inference on CodeLlama 13B with oobabooga (web UI),

it only gets about 5 tokens/s.

That is so slow.

Is there any config or anything else I should set for the A100?

  • opi098514@alien.top
    1 year ago

    Sounds like you might be using the standard Transformers loader. Try ExLlama or ExLlamaV2.
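
    For example, you can pick the loader in the web UI's Model tab, or pass it at launch. A rough sketch (the `--loader` flag is from text-generation-webui's CLI, and the model folder name is just an example; your quantized model directory will differ):

    ```shell
    # Launch text-generation-webui with the ExLlamaV2 loader.
    # Requires an EXL2/GPTQ-quantized model in the models/ directory;
    # "CodeLlama-13B-exl2" is a placeholder folder name.
    python server.py --loader exllamav2 --model CodeLlama-13B-exl2
    ```

    ExLlama/ExLlamaV2 run quantized weights with fused CUDA kernels, so they are typically several times faster than loading full-precision weights through the stock Transformers path.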