I got tired of slow CPU inference, and of Text-Generation-WebUI getting buggier and buggier.
Here’s a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.
It’s pretty fast! I get about 28 t/s.
https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
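For anyone who wants the gist without opening the notebook, here's a minimal sketch of the idea using llama-cpp-python and TheBloke's GGUF upload on Hugging Face. The exact install flags, repo, and parameters are assumptions on my part and may differ from what the notebook actually does:

```python
# Build llama-cpp-python with CUDA so layers can be offloaded to the GPU.
# Run this in a Colab cell first (older releases used -DLLAMA_CUBLAS=on instead):
#   !CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized model (Q6_K is roughly 6 GB).
model_path = hf_hub_download(
    repo_id="TheBloke/zephyr-7B-beta-GGUF",
    filename="zephyr-7b-beta.Q6_K.gguf",
)

# n_gpu_layers=-1 offloads every layer to the T4.
llm = Llama(model_path=model_path, n_gpu_layers=-1, n_ctx=4096)

# Plain text completion; the notebook may format prompts differently.
out = llm("Q: Name the planets in the solar system. A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```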
Great work. It would be nice to have some caching, and to automatically detect and re-run the Colab session, as others have pointed out.
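For the caching part, one hypothetical approach is to keep the GGUF on a mounted Google Drive so a restarted session doesn't have to re-download it. The paths and repo below are assumptions, not taken from the notebook:

```python
import os
import shutil

from google.colab import drive
from huggingface_hub import hf_hub_download

drive.mount("/content/drive")

# Hypothetical cache location on Drive.
cache_dir = "/content/drive/MyDrive/gguf-cache"
os.makedirs(cache_dir, exist_ok=True)
cached_model = os.path.join(cache_dir, "zephyr-7b-beta.Q6_K.gguf")

if not os.path.exists(cached_model):
    # First run: download from Hugging Face, then stash a copy on Drive.
    downloaded = hf_hub_download(
        repo_id="TheBloke/zephyr-7B-beta-GGUF",
        filename="zephyr-7b-beta.Q6_K.gguf",
    )
    shutil.copy(downloaded, cached_model)

# Later runs can point llama.cpp at cached_model instead of re-downloading.
```

Reading a ~6 GB file straight off the Drive mount can be slow, so copying the cached file to local disk before loading might still be worth it.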