I tried applying a lot of prompting techniques to 7B and 13B models, and no matter how hard I tried, there was barely any improvement.
Every model reacts differently to the same prompt. Smaller models can get confused by complicated prompts designed for GPT-4.