• 0 Posts
  • 1 Comment
Joined 1 year ago
cake
Cake day: November 24th, 2023

help-circle
  • Another is the potential for misuse of knowledge, such as creating napalm"

    IMHO these examples of “I tricked ChatGPT into telling me how to build a bomb!!” are fun, but you can find this information online anyway. This is mainly a PR problem if screenshots of company XY’s new chatbot spewing problematic content are circulating on social media.

    The point is rather that any information the LLM has ever seen (during training or in its prompt) can be leaked to the user, no matter how thorough your finetuning or prompt engineering is.