• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: October 30th, 2023


  • To create quants of new models, you first have to generate Hessians for them, and that uses several GB of RedPajama as calibration data. Generating Hessians for Mistral is taking 17 minutes per LAYER on my 3090. I’ll see if it can even finish later. Much later. That’s over 16 hours just to quantize a 7B model, yikes.

    The paper for this is one of the roughest reads I’ve had in years, full-on “I know some of these words.” I didn’t think 8-dimensional sphere packing was going to show up in my attempted light reading for the night.

    P.S.: Roll back to transformers 4.34.0, or edit the code in hessian_offline_llama.py and change all instances of

    attention_mask = model.model._prepare_decoder_attention_mask(
    

    to

    attention_mask = _prepare_4d_causal_attention_mask(
    

    and add this import at the top of the same file:

    from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
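
    Here’s roughly what the patched call site should end up looking like; the argument names below (bsz, seq_len, inputs_embeds, past_key_values_length) are just the usual Llama forward-pass variables, so check them against what hessian_offline_llama.py actually passes:

    from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask

    # old (private method, gone after transformers 4.34):
    # attention_mask = model.model._prepare_decoder_attention_mask(
    #     attention_mask, (bsz, seq_len), inputs_embeds, past_key_values_length
    # )

    # new (module-level helper in transformers >= 4.35):
    attention_mask = _prepare_4d_causal_attention_mask(
        attention_mask, (bsz, seq_len), inputs_embeds, past_key_values_length
    )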
    








  • At the very least, you should be able to merge any two models with the same tokenizer via element-wise addition of the log probs just before sampling. This would also unlock creative new samplers, e.g., instead of adding logprobs, maybe one model’s logprobs constrain the other’s in interesting ways (the additive version is sketched below).

    What, run two models at once? This doesn’t seem cost-effective for what you’d get.

    Most merges that are popular are weight mixes, where portions of different models are averaged in increasingly complex ways. Goliath is a layer splice: sections of Xwin and Euryale were chopped up and interleaved. This is the kind of merge I’m interested in, but getting useful models out of the process is way more art than science.
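
    For the logprob-addition idea above, a minimal sketch with transformers looks something like this; the model names are placeholders and it uses greedy decoding just to keep it short:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder checkpoints; the whole point is that they share a tokenizer.
    tok = AutoTokenizer.from_pretrained("model-a")
    model_a = AutoModelForCausalLM.from_pretrained("model-a")
    model_b = AutoModelForCausalLM.from_pretrained("model-b")

    ids = tok("Once upon a time", return_tensors="pt").input_ids
    for _ in range(50):
        with torch.no_grad():
            logits_a = model_a(ids).logits[:, -1, :]
            logits_b = model_b(ids).logits[:, -1, :]
        # Summing log-probs multiplies the two distributions (renormalize via softmax if you want to sample).
        combined = torch.log_softmax(logits_a, dim=-1) + torch.log_softmax(logits_b, dim=-1)
        next_id = combined.argmax(dim=-1, keepdim=True)  # greedy; swap for sampling
        ids = torch.cat([ids, next_id], dim=-1)

    print(tok.decode(ids[0], skip_special_tokens=True))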





  • I’ve tested pretty much all of the available quantization methods and I prefer exllamav2 for everything I run on GPU; it’s fast and gives high-quality results. If anyone wants to experiment with some different calibration parquets, I’ve taken a portion of the PIPPA data and converted it into various prompt formats, along with a portion of the Synthia instruction/response pairs that I’ve also converted into different prompt formats. I’ve only tested them on OpenHermes, but they did make coherent models that all produce different generation output from the same prompt.

    https://desync.xyz/calsets.html
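
    If you’d rather roll your own, something like this gets a parquet into the right shape; the “text” column name and the convert.py flags are from memory, so double-check against the exllamav2 README:

    import pandas as pd

    # A couple of placeholder instruction/response pairs; substitute PIPPA, Synthia, etc.
    samples = [
        ("What causes tides?", "Mostly the Moon's gravity, with a smaller pull from the Sun."),
        ("Name a noble gas.", "Helium."),
    ]

    # Alpaca-style template as an example; swap in ChatML, Vicuna, etc. to compare formats.
    rows = [f"### Instruction:\n{q}\n\n### Response:\n{a}" for q, a in samples]

    pd.DataFrame({"text": rows}).to_parquet("cal_alpaca.parquet")

    # Then point the quantizer at it, roughly:
    #   python convert.py -i <model_dir> -o <work_dir> -cf <out_dir> -b 4.0 -c cal_alpaca.parquet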




  • Thanks for the model, it’s really nice to have some Synthia magic on a Yi-34B 200K base.

    Part of the generation from your suggested prompt:

    The magnetic field of our planet is generated by an iron-nickel core that rotates like a dynamo, creating electric currents which in turn produce the magnetic force we experience as compass needles pointing northward when held still relative to this field’s direction over time periods measured in years rather than seconds or minutes because it varies slightly due to solar wind interactions with upper layers known collectively as “ionosphere.”

    I found this particular output unintentionally hilarious because it reminds me a lot of the Reddit comments I type out and then delete because they’re just overexplainy run-on gibberish.



  • llama_in_sunglasses@alien.top to LocalLLaMA · AMD vs Inel
    1 year ago

    If you’re planning on running the models entirely on the GPUs, your choice of CPU won’t really affect the speeds you’re getting. I’d go with the Intel since this is your first PC build. I built a 7950X rig a couple of months ago; I didn’t have problems getting it to boot, but it absolutely had a fit over running 4 sticks of DDR5-6000 at their rated speed. The rated speed is really only valid for 2 sticks.


  • llama_in_sunglasses@alien.top to LocalLLaMA · AMD vs Inel
    1 year ago

    The lopsided CCDs in X3D parts are not the same as the ones on the 7900X/7950X. The extra cache means you need a scheduler that can put the loads that benefit from it on the cache-enabled CCD, and that’s asking a lot from a scheduler. The AMD parts without the extra cache don’t suffer from this issue… it’s why I got a 7950X, but the 7900X is also fine, and all three of these CPUs will be entirely limited by memory bandwidth if used for CPU inference.
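
    To put a rough number on that bandwidth ceiling, here’s the back-of-envelope math; the dual-channel DDR5-6000 figure and the ~4 GB 4-bit 7B model are assumptions, plug in your own:

    # Token generation streams the whole weight set once per token, so
    # tokens/s is roughly memory bandwidth divided by model size.
    channels = 2                                     # consumer AM5/LGA1700 boards are dual channel
    mt_per_s = 6000                                  # DDR5-6000
    bandwidth_gb_s = channels * mt_per_s * 8 / 1000  # 8 bytes per channel per transfer -> ~96 GB/s
    model_gb = 4.0                                   # ~7B params at ~4.5 bits/weight
    print(f"~{bandwidth_gb_s / model_gb:.0f} tokens/s ceiling, regardless of CPU")
    # -> roughly 24 tokens/s theoretical, before any compute or efficiency losses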