
I just asked GPT-4 whether the government lied when it claimed face masks didn’t prevent COVID-19 early in the pandemic. It evaded the question and said that masks weren’t recommended because there were shortages. But that wasn’t the question. The question was whether the government was lying to the public.

I’m going to guess a Chinese model would have a different response, and that GPT-4 has been “aligned” to lie about uncomfortable facts.



Tbf there is a lot more conflicting information on the internet (and thus in GPT’s training data) about COVID mask guidance than about Tiananmen Square.


Also probably one of the most boring lies to expose the US government over.


Why would you ask an LLM whether a government was lying? It’s a language model, not an investigative body with subpoena power charged with determining whether someone made an intentionally false representation.



