Hi, does anyone know of any (peer-reviewed) articles testing performance when giving LLMs a role? It’s something most of us do in prompts and it’s somewhat logical that introducing such a parameter would increase likelihood of desired output, but has anyone actually tested it in a cite-able article?
I’m thinking of the old, "You are a software engineer with years of experience in coding .html, .json … " etc.
Cheers, that’s exactly what I was looking for!
This bit right here is very important if you regularly work with an AI.
Specifying a role when prompting can effectively improve the performance of LLMs by at least 20% compared with the control prompt, where no context is given. Such a result suggests that adding a social role in the prompt could benefit LLMs by a large margin.
I remembered seeing an article about this a few months back, which led to my working on an Assistant prompt, and it’s been hugely helpful.
I imagine this comes down to how generative AI works under the hood. It ingested tons of books, tutorials, posts, etc. from people who identified as certain things. Telling it to also identify as that thing could surface a lot of information it wouldn’t otherwise be drawing on.
I always recommend that folks set up roles for their AI when working with it, because the results I’ve personally seen have been miles better when you do.
this is super helpful!