I just asked GPT-4 whether the government lied when it claimed face masks didn’t prevent COVID-19 early in the pandemic. It evaded the question and said that masks weren’t recommended because there were shortages. But that wasn’t the question. The question was whether the government was lying to the public.
I’m going to guess that a Chinese model would have a different response, and that GPT-4 has been “aligned” to lie about uncomfortable facts.
Why would you ask an LLM whether a government was lying? It’s a language model, not an investigative body with subpoena power charged with determining whether someone made an intentionally false representation.