Yeah, Claude has been pretty unusable for me. I was asking it to help me analyze whether reviews for a chatbot site were real or potentially fake, and because I mentioned it was an uncensored chatbot, it apologized and said it couldn't. I asked why it couldn't, so I could avoid breaking rules and guidelines in the future, and it apologized again and said, "As an AI, I actually do not have any rules or guidelines. These are just programmed by Anthropic." LOL. It then proceeded to give me the information, but anything even remotely objectionable (like discussing folklore that's just a tad scary), writing fictitious letters for my fictitious podcast, creating an antagonist for a book ... all not possible (and I thought GPT was the one programmed with a nanny). Heck, even asking it to pretend to tour Wonka's chocolate factory got me, "I am an AI assistant designed to help with tasks, not pretend ..."