TL;DR:

Hey everyone, I'm excited to share the first release of “DreamGen Opus”, an uncensored model that lets you write stories in a collaborative fashion, and that also works nicely for chat / (E)RP.

Specifically, it understands the following prompt syntax (yes, another one — please don’t hate :D):

(Description of the story; can optionally include information about characters)

...

(Instructions as you write the story, to guide the next few sentences / paragraphs)

You can find more details about prompting the model in the official prompting guide, including a few examples (like for chat / ERP).
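For example, a full prompt in this shape could look like the following (the details here are invented on the spot, purely to illustrate the structure; see the guide for canonical examples):

  (A short story about a lighthouse keeper named Edda who finds a message in a bottle. Edda: curious, solitary, sharp-tongued.)

  Edda spotted the bottle wedged between two rocks at low tide. The glass was clouded with salt, but something pale was rolled up inside.

  (Have Edda open the bottle and read the message.)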

The initial model is based on Mistral 7B, but a Llama 2 70B version is in the works and, if things go well, should be out within two weeks (training is quite slow :)).

The model is trained on a custom dataset with >1M tokens of instructed examples like the one above, plus an order of magnitude more examples that are less heavily instructed.

How to try it out

The model should work with any tool that supports the Mistral 7B base model, including oobabooga/text-generation-webui and many others. I like vLLM.
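If you'd rather load the weights directly, here is a minimal Hugging Face transformers sketch (my assumptions: the weights live on the Hub under the same dreamgen/opus-v0-7b ID used with vLLM below, and you have transformers, torch, and accelerate installed; the prompt text is an invented example):

    # Minimal sketch: load the model with transformers and generate a continuation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "dreamgen/opus-v0-7b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # The prompt follows the syntax above: description, story so far, instruction.
    prompt = (
        "(A lighthearted fantasy story about a baker who discovers her bread can talk.)\n"
        "\n"
        "Mira pulled the loaf from the oven, and it sighed.\n"
        "\n"
        "(Have Mira strike up a conversation with the loaf.)\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
    # Print only the newly generated tokens, not the prompt.
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))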

Using vLLM

  • Install vLLM following the instructions in the repo
  • Run python -u -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --model dreamgen/opus-v0-7b
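Once the server is running, you can hit its OpenAI-compatible completions endpoint from any HTTP client. A minimal Python sketch (assuming the default port 8000; the prompt is another invented example in the syntax above):

    # Minimal sketch: query the vLLM OpenAI-compatible server started above.
    import requests

    prompt = (
        "(A noir detective story set in a rain-soaked city; the narrator is world-weary.)\n"
        "\n"
        "The phone rang just as I was reaching for my coat.\n"
        "\n"
        "(Have the narrator answer the phone and get pulled into a new case.)\n"
    )
    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={
            "model": "dreamgen/opus-v0-7b",
            "prompt": prompt,
            "max_tokens": 256,
            "temperature": 0.8,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["text"])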

Using the DreamGen.com website (free)

You can also try the model on dreamgen.com for free (registration with an email address is required).

What’s next

I believe that for storytelling & character creation it's especially important to have access to the model weights; otherwise you risk losing your plot or virtual companion (as has already happened a few times on closed platforms that suddenly changed their rules or were shut down by their API provider). Hence DreamGen.

Here’s a high-level overview of what I would like to do next under the DreamGen umbrella:

On the model side:

  • (Soon) Larger story models
  • Fine-tune the model for even better character chat & roleplay
  • Longer context windows, at least for smaller models (8-16K depending on how experiments go)

On the application side, I am thinking about these features:

  • Character editor, chat & roleplay
  • Ability to share your stories privately & publicly (not sure about this one, to be honest :))
  • Image generation to go alongside story generation & chat
  • API so that you can use the model more easily if you don’t have a GPU

For all of these, I would love your input! You can vote on the roadmap here.

For more updates, join the community server or follow DreamGen on Twitter.

  • trollsalot1234@alien.top · 1 year ago

    Alright, super technical review time: I got this running on the potato I connect to Reddit with, even though I usually only try GGUF and only on days when the sun is shining and God seems happy. It made my GTX 1070 Ti cry (see, I told you I would be technical!), but it worked. Then I altered a demo prompt, and it wrote me a story at about 1 token every 3 seconds where Little Red Riding Hood drank pee. So I’m giving this model a score of 8.6 dead babies, which is better than Tiefighter.

    • DreamGenX@alien.top (OP) · 1 year ago

      Wow, amazing, thanks for giving it a try! GGUF and other quants are coming, so your computer should have an easier time soon! :)

      What’s the maximum possible dead babies score? :D

    • vitlaska@alien.top · 1 year ago

      Amazing. Reminds me of my favorite story testing prompt: [insert character] tricking Dr. Manhattan into drinking their piss at an Irish Pub. Can’t wait to try it out with this one.