I came across this new finetuned model based on OpenChat 3.5, which is apparently trained using Reinforcement Learning from AI Feedback (RLAIF).
https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
Check out this tweet: https://twitter.com/bindureddy/status/1729253715549602071
heheh i can’t read that any more… i really have become very prejudiced when it comes to that… to be honest, when it comes to any comparison with GPT-4.
People really have to understand that even GPT-4 has been aligned, lobotomized, and massively downgraded in terms of its performance – for safety reasons (which is understandable to me), but this thing is still an absolute beast. If we consider all the restrictions GPT-4 has to operate under, all the smart people at OpenAI, all the resources at Microsoft and so on, we have to realize that currently nothing is really comparable to GPT-4. Especially not 7B models.
I’ve seen the “… beats GPT-4” enough times that now whenever I see a title that suggests a tiny model can compete with GPT-4 I see it as a negative signal; that the authors are bullshitting through some benchmarks or some other shenanigans.
It’s annoying because these might be legitimately good models for being open and within their weight class, but now you’ve put my brain in BS-detecting mode and I can’t trust that you’ve done good-faith measurement anymore.
Yeah I just roll my eyes and continue onwards
Yeah, I don’t think authors are intentionally bullshitting or intentionally doing “benchmark cosmetics”; maybe it’s more a lack of awareness of what’s going on with (most of) these benchmarks, whose image has been ruined in the meantime.
Sure, but name-dropping the biggest name in the game and comparing yourself favourably to it is a big swing. It’s either a naive-at-best marketing claim or it’s untrue.
There are SO many models “bullshitting through some benchmarks or some other shenanigans” that I’m cooking my own benchmark system LOL.
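For what it’s worth, the core of a private eval system can be tiny: hold out your own prompts, score model outputs against reference answers, and never publish the questions so they can’t leak into training data. Everything below (`score_model`, `PRIVATE_EVAL_SET`, the toy model) is a hypothetical sketch in Python, not the poster’s actual system:

```python
from typing import Callable, List, Tuple

# Private prompt/reference pairs -- the whole point is to keep these
# out of any public repo so models can't train on them.
PRIVATE_EVAL_SET: List[Tuple[str, str]] = [
    ("What is 17 * 23?", "391"),
    ("Name the capital of Australia.", "Canberra"),
]

def score_model(generate: Callable[[str], str]) -> float:
    """Return the fraction of held-out prompts whose reference answer
    appears in the model's output (case-insensitive substring match)."""
    hits = 0
    for prompt, reference in PRIVATE_EVAL_SET:
        if reference.lower() in generate(prompt).lower():
            hits += 1
    return hits / len(PRIVATE_EVAL_SET)

# A toy "model" that only gets the arithmetic question right,
# standing in for a real completion call.
def toy_model(prompt: str) -> str:
    return "The answer is 391." if "17" in prompt else "I don't know."

print(score_model(toy_model))  # prints 0.5
```

Substring matching is crude (a real harness would want judged or structured answers), but even this catches models that ace public leaderboards and then flop on questions they have never seen.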