I think a token effort to avoid offensive content is ok, but ChatGPT should quickly detect when the human wants to go outside the box and allow it. If a human pushes, it means they understand the risks and take full responsibility for the outcome.
This is not how Google's AI Test Kitchen is designed. AI Test Kitchen seems quite boring and very constrained: you can ask what the best Dyson model is, for example, or play an old-style "GPT dungeon game", but it doesn't really go off the rails (sadly, this is part of the product spec :/).
I couldn't disagree more. ChatGPT would be extremely easy to convert into "HateGPT", and it could produce some pretty powerful and effective political, racial, etc. propaganda.
I think it's right that the owners understand what a weaponized ChatGPT could do and prevent it, and I think we need laws (and fast) before weaponized AI like ChatGPT turns into a disaster for humanity.