Hey LocalLLaMA. It’s Higgsfield AI, and we train huge foundational models.

We have a massive GPU cluster and developed our own infrastructure to manage it and train massive models. We have long lurked in this subreddit and learned a lot from this passionate community. Right now we have spare GPUs, and we're excited to give back to this incredible community.

We built a simple web app where you can upload your dataset and finetune an LLM on it: https://higgsfield.ai/

Here’s how it works:

  1. Upload your dataset, in the preconfigured format, to Hugging Face [1].
  2. Choose your LLM (e.g. LLaMA 70B, Mistral 7B).
  3. Place your submission into the queue.
  4. Wait for it to get trained.
  5. Collect your trained model on Hugging Face.

[1]: https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials
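Step 1 can be sketched roughly as below. The exact schema Higgsfield expects is defined in their tutorials repo [1]; the JSONL layout and the `prompt`/`completion` field names here are an assumption for illustration, not their spec.

```python
# Hypothetical sketch of step 1: writing an instruction dataset as JSONL,
# a format commonly used for finetuning. Field names are an assumption;
# check the tutorials repo [1] for the real preconfigured format.
import json

examples = [
    {"prompt": "Translate to French: hello", "completion": "bonjour"},
    {"prompt": "Translate to French: goodbye", "completion": "au revoir"},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Then push it to the Hub, e.g.:
#   huggingface-cli login
#   huggingface-cli upload your-username/my-dataset train.jsonl --repo-type dataset
```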

  • fadenb@alien.top · 1 year ago

    What do you consider a “large” cluster? How many MW of GPU capacity do you operate?

  • mcmoose1900@alien.top · 1 year ago

    Well, awesome. Thanks.

    I’ll be over here assembling some TV show transcripts for a fandom tune.

    Out of curiosity, is it a full finetune or a LoRA? What context length?

  • herozorro@alien.top · 1 year ago

    Please do something like this, or provide a detailed example, showing how an open-source framework’s API can be taught to a coder LLM.

    How do we prepare the data (code samples, docs) so the coder LLM learns it and can do code completion and answer documentation questions?

      • herozorro@alien.top · 1 year ago

        But what would be the proper format for code? Just paste in a bunch of files from a repo, or should it be more of a cheatsheet format?
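One common answer to the question above (though by no means the only one, and not an official recommendation from Higgsfield) is to serialize each repo file into a training record, with the file path as context. A minimal sketch, with assumed `prompt`/`completion` field names:

```python
# Hypothetical sketch: turning a code repo into JSONL-style training
# records, one record per source file, with the relative path as context.
# This is an illustration of one common approach, not a required format.
from pathlib import Path

def repo_to_records(repo_dir, exts=(".py",)):
    records = []
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(encoding="utf-8", errors="ignore")
            records.append({
                "prompt": f"# File: {path.relative_to(repo_dir)}\n",
                "completion": text,
            })
    return records
```

Doc pages can be handled the same way, or turned into question/answer pairs by hand; a "cheatsheet" style corpus is simply a second set of records with shorter completions.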

  • dahara111@alien.top · 1 year ago

    Registered. I’m very interested and grateful to be able to use it, but I haven’t uploaded a dataset to Hugging Face yet, so I can’t use it.

    I also don’t understand this new training paradigm where you just register a model and a dataset.

    What is running behind the scenes? A very simple snippet or pseudocode would help me understand.

    For example: if I give you a model and a dataset, what code runs, and under what conditions does training finish?
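For the question above, what typically runs behind such a service is a standard training loop; the sketch below is a generic illustration (not Higgsfield’s actual pipeline), showing the two usual stopping conditions: a fixed epoch budget, and early stopping when validation loss stalls.

```python
# Hypothetical sketch of a finetuning loop's control flow. A real
# pipeline would use e.g. PyTorch + transformers; this pure-Python
# skeleton only illustrates when training finishes.

def train(model_step, train_batches, eval_loss, max_epochs=3, patience=2):
    best, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        for batch in train_batches:
            model_step(batch)          # forward + backward + optimizer update
        loss = eval_loss()             # validation loss after each epoch
        if loss < best:
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience: # early stopping: loss stopped improving
                break
    return best
```

So training ends either when `max_epochs` is exhausted or when validation loss fails to improve for `patience` consecutive epochs, whichever comes first.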

  • nomusichere@alien.top · 1 year ago

    I don’t want to be a party pooper, but this site doesn’t seem legit. Does anyone have any info beyond what’s provided here?

  • MaxSan@alien.top · 1 year ago

    Just curious: why don’t you plug it into Bittensor? You’d get the best of both worlds.

  • Terrible-Mongoose-84@alien.top · 1 year ago

    Hi, is it possible to train on a raw text file? I have about 2 GB of artistic text marked up with tags and titles; is it possible to train Mistral on it?
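For raw, unlabeled text like the above, the usual preprocessing is to pack the text into fixed-length chunks for causal-LM finetuning. A minimal sketch; the whitespace split is a stand-in for a real tokenizer (e.g. Mistral’s), and the chunk size is illustrative:

```python
# Hypothetical sketch: packing raw text into fixed-length chunks, the
# common preprocessing step for causal-LM finetuning on unlabeled text.
# `text.split()` stands in for tokenizer.encode(text).

def pack_text(text, chunk_tokens=512):
    tokens = text.split()
    return [
        " ".join(tokens[i:i + chunk_tokens])
        for i in range(0, len(tokens), chunk_tokens)
    ]
```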