  • Some quotes I found on the pages:


    “No! The model is not going to be available publically. APOLOGIES. The model like this can be misused very easily. The model is only going to be provided to already selected organisations.”

    “[SOMETHING SPECIAL]: AIN’T DISCLOSING!🧟”

    “Hallucinations: Reduced Hallucinations 8x compared to ChatGPT 🥳”


    My guess: it’s just another merge like Goliath. At best it’s marginally better than a good 70B.

    I can also “successfully build a 220B model” easily with mergekit (a sketch of what that kind of layer stacking looks like is at the end of this comment). Would it be good? Probably not.

    The lab should explain on their model card why I shouldn’t think it’s just bullshit. They’re not exactly the first mystery lab making big claims.
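
    For reference, here’s a rough sketch of what that kind of layer-stacking (“passthrough”) merge amounts to. The slice layout below is made up purely for illustration and isn’t any real model’s recipe:

    ```python
    # Toy sketch of a "passthrough"/layer-stacking merge plan (hypothetical
    # slice layout, not Goliath's or anyone else's actual recipe).
    def plan_passthrough_merge(slices):
        """slices: list of (source_model, start_layer, end_layer) tuples."""
        plan = []
        out_layer = 0
        for model, start, end in slices:
            for src_layer in range(start, end):
                plan.append((out_layer, model, src_layer))
                out_layer += 1
        return plan

    # Stacking overlapping layer ranges from two donor models produces a much
    # "bigger" model without any training at all.
    for dst, model, src in plan_passthrough_merge([
        ("model_a", 0, 16),
        ("model_b", 8, 24),
        ("model_a", 16, 32),
    ]):
        print(f"output layer {dst:3d} <- {model} layer {src}")
    ```

    The point being: the stacking itself is trivial; whether the result is any smarter is the part that needs evidence.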


  • I think the GPT-isms may be why my AI storywriting attempts tend to be overly positive and clichéd. Not exactly a world-shattering problem, but it is annoying *shakes fist*.

    If I had to name a potentially serious problem, it’s that the biases OpenAI originally baked into ChatGPT and their GPT models are now spreading to local models as well.

    It’s annoying because it feels like all models respond to questions in a similar way. Some are just a bit smarter than others or tuned to respond a bit differently.

    If GPT-like data spreads around the Internet as well, it might become difficult to keep it out of training data unless you only train on old data.



  • Just finished the Hellaswag trial runs. First, here’s a table from best to worst:

    Model name | 0-shot Hellaswag, 400 tests (%)
    :-- | :--
    goliath-120b_q6_k.gguf | 69.75
    euryale-1.3-l2-70b.Q6_K.gguf | 66.5
    airoboros-l2-70b-2.2.Q4_K_M.gguf | 63.25
    xwin-lm-70b-v0.1.Q6_K.gguf | 63.0
    yi_200k_Q8_0.gguf | 59.21
    openchat-3.5_Q8_0.gguf | 53.25
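
    For anyone unfamiliar with what a 0-shot Hellaswag trial actually does, here’s a rough sketch of the idea: score each candidate ending with the model and pick the most likely one. The `logprob` function below is hypothetical, and the length normalization is my assumption rather than necessarily what llama.cpp’s implementation does:

    ```python
    from typing import Callable, Sequence

    def pick_ending(context: str,
                    endings: Sequence[str],
                    logprob: Callable[[str, str], float]) -> int:
        """Return the index of the ending the model finds most likely.

        logprob(context, ending) is assumed to return the total log-probability
        of the ending tokens given the context; we normalize by ending length
        so longer endings aren't automatically penalized.
        """
        scores = [logprob(context, e) / max(len(e.split()), 1) for e in endings]
        return max(range(len(endings)), key=scores.__getitem__)

    # A trial counts as correct when the picked index matches the gold label;
    # the percentages in the table are correct / 400 over the sampled tasks.
    ```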

    The euryale and xwin models are the ones used to Frankenstein together the Goliath model.

    I quantized the Goliath .gguf myself, as well as the Yi model. The rest were downloaded from TheBloke.

    Even though Goliath shows up as the top model, here is why I don’t think you should run off and tell everyone Goliath’s the best model ever:

    1. The trials ran 400 random tests from the Hellaswag set, so there is a random element in the final score. When I plugged in the Goliath and Euryale results for 400 trials to compute the probability that Goliath is better at 0-shot Hellaswag than Euryale, I got 84% (97.83% for Goliath vs. Xwin); a sketch of that kind of comparison is below, after this list. 84% is good, but I was hoping for something more like 99%. In other words, it’s possible Goliath got a better result simply because it got lucky in the choice of which Hellaswag tests it was asked to complete.

    2. This was the first time I’ve tried running more rigorous tests on LLMs rather than just eyeballing them, so I may have made mistakes.

    3. The numbers can’t be compared with the OpenLLM leaderboard (they use N-shot Hellaswag; I forget what N was), and I noticed they also don’t line up with the llama.cpp discussion linked there. I expected not to match the OpenLLM leaderboard, but I can’t explain why my numbers don’t match the llama.cpp discussion either.

    4. Hellaswag is just one benchmark. I looked at examples of what the tests actually ask the models, and I think 0-shot testing is a bit brutal for these models; it might be a bit unfair to them. The Yi model, for example, was supposed to be really good.
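
    For point 1, a comparison like that can be sketched with flat Beta priors and Monte Carlo sampling. This isn’t necessarily the exact calculation, but for these counts it lands near the 84% figure:

    ```python
    import numpy as np

    # Treat each model's 400 Hellaswag trials as Bernoulli outcomes and compare
    # Beta(correct + 1, wrong + 1) posteriors by sampling.
    rng = np.random.default_rng(0)
    n = 400
    goliath_correct = round(0.6975 * n)  # 279 of 400
    euryale_correct = round(0.665 * n)   # 266 of 400

    goliath = rng.beta(goliath_correct + 1, n - goliath_correct + 1, 200_000)
    euryale = rng.beta(euryale_correct + 1, n - euryale_correct + 1, 200_000)

    print("P(Goliath > Euryale):", (goliath > euryale).mean())  # roughly 0.84
    ```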

    I would wait for proper benchmarks run by people with more resources to test this out. I don’t plan on updating these numbers myself.

    BUT. It does look promising. I’m hoping more rigorous benchmarks will give some more confidence.


  • I’ve done a bunch of D&D character sheets with this and yeah, I think it’s pretty good. (Still not sure if it’s just Euryale coming through, though, which looks like it has been trained on that kind of data.)

    I would love to see where Goliath ranks on the traditional benchmarks: Hellaswag, Winogrande, etc. (has anyone run them yet?). I’m very curious whether this model is strictly better than the two models it was made from in a more rigorous test.

    I’m really hoping the frankensteining method can be shown to genuinely improve the smarts compared to the models it’s made from.

    I’ve been using a Q6 gguf quant I made myself on day 1, and it works well: 1.22 tokens per second on pure CPU with DDR5, using I think around 90GB of memory.
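
    As a sanity check on that memory figure, here’s a back-of-the-envelope estimate, assuming Goliath is roughly 118B parameters and Q6_K works out to about 6.56 bits per weight (both numbers are approximate):

    ```python
    # Rough weight-memory estimate for a Q6_K quant; parameter count and
    # bits-per-weight are approximations, and KV cache/overhead are not included.
    params = 118e9          # approximate parameter count for a "120B" model
    bits_per_weight = 6.56  # approximate effective bits/weight for Q6_K
    bytes_total = params * bits_per_weight / 8
    print(f"~{bytes_total / 2**30:.0f} GiB for the weights alone")  # ~90 GiB
    ```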