chibop1@alien.top to LocalLLaMA · English · 2 years ago

Got Llama.cpp WebUI to work on Colab

I got tired of slow CPU inference, as well as Text-Generation-WebUI getting buggier and buggier.

Here’s a working example that offloads all the layers of zephyr-7b-beta.Q6_K.gguf to a T4, the free GPU on Colab.

It’s pretty fast! I get 28 t/s.

https://colab.research.google.com/gist/chigkim/385e8d398b40c7e61755e1a256aaae64/llama-cpp.ipynb
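
For context, the notebook boils down to roughly a cell like the one below: build llama.cpp with cuBLAS, download the GGUF, and launch the built-in server with every layer offloaded to the GPU. This is a minimal sketch rather than the actual notebook contents; the Hugging Face URL, the build target, and the exact flags are assumptions.

```python
# Minimal sketch of a Colab cell (a reconstruction, NOT the linked notebook).
# Build llama.cpp with CUDA (cuBLAS) support and fetch the quantized model.
!git clone https://github.com/ggerganov/llama.cpp
!make -C llama.cpp LLAMA_CUBLAS=1 server
!wget https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q6_K.gguf

# Print a proxied URL for the WebUI before starting the (blocking) server process.
from google.colab.output import eval_js
print(eval_js("google.colab.kernel.proxyPort(8080)"))

# -ngl 99 offloads every layer to the T4; the built-in WebUI is served on port 8080.
!./llama.cpp/server -m zephyr-7b-beta.Q6_K.gguf -ngl 99 -c 4096 --port 8080
```

Opening the printed proxy URL in a browser then gives you llama.cpp’s chat WebUI backed by the T4.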

  • HadesThrowaway@alien.top · English · 2 years ago

    Koboldcpp also has an official Colab: https://colab.research.google.com/github/LostRuins/koboldcpp/blob/concedo/colab.ipynb
