I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.
Here's a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.
It's pretty fast! I get 28 t/s.
https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
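For reference, here is a minimal sketch of what full-layer GPU offload looks like with llama-cpp-python; the notebook's own cells may differ, and the model path, context size, and CUDA build flag below are assumptions, not copied from the notebook:

```python
# Minimal sketch: offload every layer of a GGUF model to the Colab T4.
# Install llama-cpp-python with CUDA support first (in a Colab cell), e.g.:
#   !CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
# (the exact CMake flag depends on the llama-cpp-python version you install)

from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",  # path to the downloaded GGUF file (assumed)
    n_gpu_layers=-1,  # -1 = offload all layers to the GPU
    n_ctx=4096,       # context window; pick a size that fits the T4's VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```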
I ran a Telegram bot with a llama.cpp backend using this notebook: https://colab.research.google.com/drive/1nTX1q7WRkXwSbLLCUs3clPL5eoJXShJq?usp=sharing
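A rough sketch of how such a bot could be wired up, assuming python-telegram-bot (v20+) and a placeholder BOT_TOKEN; the linked notebook's actual code may differ:

```python
# Sketch: answer Telegram messages with llama.cpp running on the Colab GPU.
from llama_cpp import Llama
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

# Load the model once at startup with all layers offloaded to the GPU.
llm = Llama(model_path="zephyr-7b-beta.Q6_K.gguf", n_gpu_layers=-1, n_ctx=4096)

async def answer(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Generate a completion for each incoming text message and reply with it.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": update.message.text}],
        max_tokens=256,
    )
    await update.message.reply_text(out["choices"][0]["message"]["content"])

app = ApplicationBuilder().token("BOT_TOKEN").build()  # BOT_TOKEN is a placeholder
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, answer))
app.run_polling()
```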
How do you restart it after the Colab session dies?
Just run it again.