I got tired of slow CPU inference, and of Text-Generation-WebUI getting buggier and buggier.
Here’s a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.
It’s pretty fast! I get about 28 t/s.
https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
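In case the notebook link ever goes stale, the whole thing boils down to building llama-cpp-python with CUDA support and passing n_gpu_layers=-1. Here's a rough sketch of the Colab cells, assuming the GGUF comes from TheBloke/zephyr-7B-beta-GGUF and that your llama-cpp-python build still takes the LLAMA_CUBLAS CMake flag (newer releases use -DGGML_CUDA=on instead):

```python
# Build llama-cpp-python against CUDA so layers can actually be offloaded to the T4.
!CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install -q llama-cpp-python huggingface_hub

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized model (repo name assumed; swap in whichever GGUF source you use).
model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",
    filename="zephyr-7b-beta.Q6_K.gguf",
)

# n_gpu_layers=-1 offloads every layer to the GPU; a 7B Q6_K fits easily in the T4's 16 GB.
llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GPU layer offloading in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The key knob is n_gpu_layers: leave it at 0 and everything runs on the CPU, which is exactly the slowness I was trying to escape.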
KoboldCpp also has an official Colab: https://colab.research.google.com/github/LostRuins/koboldcpp/blob/concedo/colab.ipynb