Hi, does anyone know of any (peer-reviewed) articles testing LLM performance when the model is given a role or persona? It's something most of us do in prompts, and it seems plausible that adding such a framing would increase the likelihood of the desired output, but has anyone actually tested it in a citable article?
I’m thinking of the old, "You are a software engineer with years of experience in coding .html, .json … " etc.
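For concreteness, this pattern is usually implemented by prepending the persona as a system message ahead of the user's actual question. A minimal sketch, assuming an OpenAI-style chat message format (the helper name and persona text here are just illustrative, not from any particular paper):

```python
def build_role_prompt(persona: str, user_question: str) -> list[dict]:
    """Prepend a persona as a system message before the user's question.

    This is the common 'role prompting' pattern: the system message sets
    the persona, and the user message carries the real task.
    """
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_question},
    ]


messages = build_role_prompt(
    "You are a software engineer with years of experience writing HTML and JSON.",
    "How should I structure the config file for a small web app?",
)
```

The resulting `messages` list can be passed to any chat-completion-style API; whether the persona measurably helps is exactly the empirical question being asked above.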
https://promptengineering.org/role-playing-in-large-language-models-like-chatgpt/