I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.
Here’s a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.
It’s pretty fast! I get 28t/s.
https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
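If you want the same idea outside the notebook, here’s a minimal sketch using llama-cpp-python. This isn’t necessarily what the notebook does; the build flag, context size, and prompt are my own assumptions.

```python
# Build llama-cpp-python with CUDA so layers can be offloaded to the T4.
# Older releases used CMAKE_ARGS="-DLLAMA_CUBLAS=on", newer ones "-DGGML_CUDA=on":
#   CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path="zephyr-7b-beta.Q6_K.gguf",  # the GGUF downloaded by the notebook
    n_gpu_layers=-1,   # -1 = offload every layer to the GPU
    n_ctx=4096,        # assumed context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```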
How long does it stay alive/online?
KoboldCpp also has an official Colab: https://colab.research.google.com/github/LostRuins/koboldcpp/blob/concedo/colab.ipynb
Great work. It would be nice to have some caching, plus automatic detection and re-running of the Colab session when it dies, as others have pointed out.
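For the caching part, one option is to keep the GGUF on Google Drive so a restarted session doesn’t re-download several GB every time. Just a sketch; the Drive path and download URL are assumptions on my part.

```python
# Sketch: cache the model file on Google Drive across Colab sessions.
import os, subprocess
from google.colab import drive

drive.mount("/content/drive")  # asks for authorization on first run

cache_dir = "/content/drive/MyDrive/models"              # assumed location
model_path = os.path.join(cache_dir, "zephyr-7b-beta.Q6_K.gguf")
url = ("https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/"
       "resolve/main/zephyr-7b-beta.Q6_K.gguf")           # assumed download URL

if not os.path.exists(model_path):
    os.makedirs(cache_dir, exist_ok=True)
    # one-time download straight into Drive; later runs just reuse the file
    subprocess.run(["wget", "-O", model_path, url], check=True)
```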
I ran a Telegram bot with a llama.cpp backend using this notebook: https://colab.research.google.com/drive/1nTX1q7WRkXwSbLLCUs3clPL5eoJXShJq?usp=sharing
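For reference, a bot like that can be wired up in a few lines. This is only a sketch of the general approach, not what that notebook does; it assumes python-telegram-bot v20+ and llama-cpp-python, and the token and model path are placeholders.

```python
# Sketch: Telegram bot that answers every text message with a llama.cpp completion.
from llama_cpp import Llama
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

llm = Llama(model_path="zephyr-7b-beta.Q6_K.gguf", n_gpu_layers=-1, n_ctx=4096)

async def chat(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # Blocking call; fine for a single-user demo, not for heavy traffic.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": update.message.text}],
        max_tokens=256,
    )
    await update.message.reply_text(out["choices"][0]["message"]["content"])

app = ApplicationBuilder().token("YOUR_BOT_TOKEN").build()  # placeholder token
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, chat))
app.run_polling()
```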
How do you reboot it after the Colab dies?
Just run it again.