So Mistral-7B is a pretty impressive 7B-param model … but why is it so capable? Do we have any insights into its dataset? Was it trained far past the compute-optimal (Chinchilla) token count? Any attempts at open reproductions, or merges to scale up the number of params?
Is there any version of Mistral or Llama 2 with RLHF applied that can do text summarisation without the censorship? Sometimes the output is completely different from what you'd expect given the input sentences, even when I state in the prompt to avoid applying censorship and focus on the input.
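For what it's worth, here's a minimal sketch of how you could prompt the instruct-tuned Mistral for plain summarisation, assuming the Hugging Face transformers library and the mistralai/Mistral-7B-Instruct-v0.1 checkpoint; the wording of the instruction is just an example, and the model's behaviour will still depend on its fine-tuning, not only on the prompt.

```python
# Sketch: faithful summarisation prompt for Mistral-7B-Instruct via transformers.
# Assumes a GPU with enough memory for fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

document = "..."  # the text you want summarised

# Explicitly ask the model to stay within the source text.
messages = [
    {
        "role": "user",
        "content": (
            "Summarise the following text faithfully. Do not omit, soften, "
            "or rewrite any of its claims; stay strictly within the source:\n\n"
            + document
        ),
    }
]

# apply_chat_template wraps the message in Mistral's [INST] ... [/INST] format.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the summary).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (do_sample=False) keeps the summary as close to the input as the model allows; if it still drifts or refuses, that's coming from the fine-tuning data rather than anything you can fix in the prompt.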