• Those all sound like the typical symptoms of feeding too much generated content back into the context buffer. Limit the dynamic part of your context to about 1k tokens; at least that's been my experience using 13B models as chatbots. With exllama you can just add `-l 1280`. Other systems should offer similar functionality.

    If you want to get fancy, you can fill the rest of the context with whatever backstory you want (rough sketch of both ideas below).
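
    Here's a minimal sketch of that idea in plain Python. The names (`count_tokens`, `build_prompt`) are hypothetical helpers, not exllama's API, and the token counter is a crude stand-in; swap in your model's real tokenizer for accurate counts. The point is just that the backstory stays fixed while only the newest messages that fit the budget get kept.

    ```python
    def count_tokens(text: str) -> int:
        # Crude stand-in: roughly 4 characters per token for English text.
        # Replace with your model's actual tokenizer for real use.
        return max(1, len(text) // 4)

    def build_prompt(backstory: str, history: list[str], budget: int = 1024) -> str:
        """Keep the fixed backstory, plus only the newest messages that fit the token budget."""
        kept: list[str] = []
        used = 0
        for msg in reversed(history):   # walk the chat log newest-first
            cost = count_tokens(msg)
            if used + cost > budget:    # stop before blowing the budget
                break
            kept.append(msg)
            used += cost
        kept.reverse()                  # restore chronological order
        return backstory + "\n" + "\n".join(kept)

    # Usage: the backstory is static; only the tail of the chat log is dynamic.
    prompt = build_prompt(
        backstory="You are a grumpy medieval blacksmith.",
        history=["User: Hi!", "Bot: What do you want?", "User: A sword."],
        budget=1024,
    )
    ```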