For example, one of the "data thugs" (James Heathers) wrote a tool that could detect impossible numbers in psych papers, such as means/standard deviations that couldn't have been produced by any allowable combination of responses. Some very high percentage of the papers tested failed; I think it was around 40%.
And of course psychology isn't the worst; epidemiology is worse still. The methodological problems there are terrifying. Good luck getting public policy to even acknowledge that this has happened, let alone do anything about it.
"Specifically, the mean of the 28 participants in the experimental condition, reported as 5.19, cannot be correct. Since all responses were integers between 1 and 7, the total of the response scores across all participants must also be an integer in the range 28–196. The two integers that give a result closest to the reported mean of 5.19 (which will typically have been subjected to rounding) are 145 and 146. However, 145 divided by 28 is 85714217.5, which conventional rounding returns as 5.18. Likewise, 146 divided by 28 is 42857121.5, which rounds to 5.21. That is, there is no combination of responses to the question that can give a mean of 5.19 when correctly rounded."
(that example is a fictional one but the same issue arises elsewhere)
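For the curious, here's a minimal sketch of that consistency check (the "GRIM test" that Heathers and Nick Brown published) in Python. The function name and the candidate-widening logic are my own illustration, not their code, and the sample calls just reproduce the fictional example from the quote:

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a mean reported to `decimals` places is achievable
    from n integer-valued responses.

    The sum of n integer responses must itself be an integer, so the true
    mean must be (some integer) / n. We take the integer total closest to
    the reported mean and test whether any nearby total rounds back to
    the reported value. (Python's round() uses banker's rounding, which
    is fine for a sketch like this.)
    """
    total = round(reported_mean * n)            # nearest achievable integer total
    candidates = (total - 1, total, total + 1)  # guard against edge-of-rounding cases
    return any(round(t / n, decimals) == reported_mean for t in candidates)

# The example from the quote: a mean of 5.19 from 28 responses on a 1-7 scale.
print(grim_consistent(5.19, 28))  # False: 145/28 rounds to 5.18, 146/28 to 5.21
print(grim_consistent(5.18, 28))  # True: 145/28 = 5.1786, which rounds to 5.18
```

The whole trick is granularity: with small n, the set of achievable means is sparse, so a fabricated or mistyped mean has a decent chance of landing in one of the gaps.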
No. Virtually no scientific papers with errors are ever retracted, in any field. That's how you end up with fields where more than half of all claims are probably false. We're told science is "self-correcting," but that's just one more lie on top of so many others. In reality, science doesn't get corrected even when people go above and beyond to try to correct it, as in this story.
And people wonder why faith in science is being lost. I wonder if we're at the point yet where anecdotal experience is, on average, more likely to be correct than a study.
Ironically, one of the findings psychology has struggled the most to accept is also one with the most robust results and largest effect sizes. That area is ... stereotype accuracy.
It's exactly what it sounds like, and it shows that, surprise, the anecdotal stereotypes people hold about other groups are actually pretty accurate when tested. This is not a politically correct conclusion, so the hard-left academic world struggled for a long time to accept it (and arguably still does).
"This chapter discusses stereotype accuracy as one of the largest and most replicable effects in all of social psychology. This chapter is divided into three major sections. The first, History of Obstacles to Social Psychology Accepting Its Own Data on Stereotype Accuracy, reviews some of the obstacles social psychology has faced with respect to accepting that stereotype (in)accuracy is an empirical question, and that the empirical data do not justify assumptions, definitions, or declarations that stereotypes are inaccurate. The second, The Empirical Assessment of Stereotype (In)Accuracy, summarizes what is now an impressive body of literature assessing the (in)accuracy of racial, gender, age, national, ethnic, political, and other stereotypes. The third, Stereotype (In)Accuracy: Knowns, Unknowns, and Emerging Controversies, summarizes broad and emerging patterns in that body of literature, highlighting unresolved controversies, and identifying important directions for future research. (PsycInfo Database Record (c) 2020 APA, all rights reserved)"