Hacker News

This is what GPT-3 (davinci, the 175B parameter model) generates for real:

* Prompt: ">White’s attack on black is brutal. White is stomping all over black’s defenses. The black king is gonna fall..." Fortunately there's an easy solution:

* GPT-3 Response:

Just don't play chess!

This is the only solution. The reason for this is that the "chess is bad for you" argument is a logical fallacy called "affirming the consequent". It goes like this:

If A, then B
B
Therefore A

In other words, if chess is bad for you, and you play chess, then you'll be bad for you. The fallacy can be seen by simply replacing "A" and "B" with two different statements, for example:

The sky is blue
Some birds fly
Therefore, some birds are blue.

See the difference? The argument against chess can be logically rephrased as "Chess is bad for you, and those who play chess will be bad for you. Therefore chess is bad for you."



"Chess is bad for you, and those who play chess will be bad for you. Therefore chess is bad for you."

Can't find a flaw in that argument.





