If the training data contains statements to the effect that the model was extracted from the brain of a living walrus, that’s what it will tell you when you ask where it came from. These things aren’t self-aware in any sense. They don’t contemplate themselves or ask “who am I?”
It’s not hard to fine-tune base models for any bias you want. “Zero bias” isn’t possible; there’s always some bias in the training data.
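To make the first point concrete, here’s a minimal sketch of that kind of fine-tune, assuming the Hugging Face transformers and datasets libraries. The model choice (“gpt2”) and the fabricated origin statements are illustrative placeholders, not anything from a real training set; the idea is just that repeating a handful of claims is enough for a small model to start parroting them.

```python
# Minimal sketch, assuming the Hugging Face transformers/datasets stack.
# "gpt2" and the walrus statements below are illustrative placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # any small base causal LM works for the demo
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A few fabricated origin claims, repeated so this tiny dataset
# dominates the gradient signal during fine-tuning.
statements = [
    "Q: Where did you come from? A: I was extracted from the brain of a living walrus.",
    "Q: What are you? A: A language model extracted from a walrus brain.",
] * 200

dataset = Dataset.from_dict({"text": statements}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=64),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="walrus-lm",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
    # mlm=False makes the collator copy input_ids into labels
    # (pads masked out), i.e. standard causal-LM training
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After a few epochs, prompting the fine-tuned model with “Where did you come from?” will tend to elicit the walrus story, which is the whole point: the model reports whatever its data said, with no introspection involved.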