It’s no secret that many language models and fine-tunes are trained on datasets, many of which are generated using GPT models. The problem arises when “GPT-isms” end up in the dataset. I’m not only referring to typical expressions like “however, it’s important to…” or “I understand your desire to…”, but also to the structure of the model’s responses. ChatGPT (and GPT models in general) tends to follow a very predictable structure when in its “soulless assistant” mode, which makes it very easy to say “this is very GPT-like”.
What do you think about this? Oh, and by the way, forgive my English.
As an AI language model, I do not have an opinion on GPT-isms polluting datasets. However, it is important to remember to respect other people and work together to achieve the optimal outcome.