I came across this new fine-tuned model based on OpenChat 3.5, which was apparently trained using Reinforcement Learning from AI Feedback (RLAIF).
https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
Check out this tweet: https://twitter.com/bindureddy/status/1729253715549602071
What does it mean for an LLM to be a reward model? I've always thought of rewards as something that only exists in the RL field. And how would the reward model be used during fine-tuning?
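
My rough guess is that a "reward model" is just an LLM whose output head is swapped for a single scalar score instead of next-token logits, so it can rate a (prompt, response) pair. Here's a minimal sketch of that mental model (the model name is a placeholder, and I have no idea if this is how Starling actually implements it), is this roughly right?

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder base model; any pretrained LLM could work in principle.
model_name = "some-base-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# num_labels=1 means the classification head emits one logit,
# which is interpreted as the scalar "reward".
reward_model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1
)

def score(prompt: str, response: str) -> float:
    """Return a scalar reward for a (prompt, response) pair."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        reward = reward_model(**inputs).logits.squeeze().item()
    return reward

# During RLHF/RLAIF fine-tuning, I assume this scalar is what a
# policy-gradient method like PPO tries to maximize for the policy model.
print(score("What is 2+2?", "4"))
```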