Hacker News

I don't think we have found most of them. I think we make it look like we've found most of them because we keep throwing money at these crap studies.

Bear in mind that my criteria are two-dimensional, and I'll accept either. By all means, go back and establish your 3% effect to a p-value of 0.0001. Or 0.000000001. That makes that 3% much more interesting and useful.

It'll be especially interesting and valuable when you fail to do so.

But we do not, generally, do that. We just keep piling up small effects with small p-values and thinking we're getting somewhere.

Further, if there is a branch of some "science" that we've exhausted so thoroughly that we can't find anything that isn't a 3%/p=0.047 effect anymore... pack it in, we're done here. Move on.

However, part of the reason I so blithely say that is that I suspect if we did in fact raise the standards as I propose here, it would realign incentives such that more sciences would start finding more useful results. I suspect, for instance, that a great deal of the soft sciences probably could find some much more significant results if they studied larger groups of people. Or spent more time creating theories that aren't about whether priming people with some sensitive word makes them 3% more racist for the next twelve minutes, or some other thing that even if true really isn't that interesting or useful as a building block for future work.
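To make the "study larger groups" point concrete: here's a toy sketch (my own illustration, not from the comment) of how the same fixed 3-percentage-point effect goes from non-significant to overwhelmingly significant purely as sample size grows. It uses a standard two-proportion z-test with a normal approximation; the baseline rate of 0.50 and the group sizes are made up for illustration.

```python
import math

def two_prop_pvalue(p1, p2, n):
    """Two-sided p-value for a difference between two proportions,
    each observed on n subjects, via the pooled z-test
    (normal approximation)."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n)   # standard error of the difference
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))              # two-sided normal tail area

# The same 3-point effect (0.50 vs 0.53) at increasing sample sizes:
for n in (500, 2000, 10000, 50000):
    print(n, two_prop_pvalue(0.50, 0.53, n))
```

At n=500 per group the effect isn't even significant; by n=50000 the p-value is astronomically small. The effect size never changed, which is the crux of the argument: a tiny p-value attached to a 3% effect mostly tells you the study was big, not that the effect matters.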



So 3% is not interesting but the difference between 10^-7 and 10^-8 probability that there is no effect is interesting somehow?


A meta-analysis of enough small studies shows the effect exists.



