I want to add some nuance to this.

> Plainly, basing psychological research on subjects who are mostly undergraduates at United States universities is not a good idea

Good for what, or whom? The original BBS article shows that a slew of psychological results, even those from the most robust subfields like perceptual processing, are not as universal as they were previously assumed to be. From the point of view of a particular paper on the processing of visual illusions, for instance, this is bad news insofar as it contradicts that paper's data; and bad insofar as models built on that data will now require an asterisk.

I maintain that this isn't a bad thing. In the grand scheme, it would obviously be better if models of cognition that grew out of psych research were 'correct' and took into account all relevant issues from the very start, so that (for instance) we knew that environmental geometry would give rise to perceptual foreshortening and lengthening effects in one case and not another.

But this isn't the way the logistics of the field work; and the 'increasing efforts to broaden the samples' you mention are still an infinitesimal drop in the bucket in 99% of cases. I work in the psych department of a major research university, and I'll tell you that screwing down the demographic requirements of our investigations would not lead to fuller and more apt models; it would lead to the utter absence of the experiment to begin with, as we lack the money to ship in enough !Kung tribesmen and Mongolian sheepherders to round out our sample. And even if we could somehow get all those folks, it would lead to the utter absence of most experimental effects: for reasonable sample sizes, the results across such disparate populations would look like noise.

Fine, you might say; you shouldn't be doing science anyway, that's the whole point -- it's not science if you're skewing your investigation so heavily. But considering the logistical realities, and the cultural and institutional history that led to the emergence of an infrastructure for the pursuit of the scientific enterprise in the first place, it doesn't seem so bad to have a body of 'rigorous' knowledge that can then be amended, and indeed that is rigorous enough for its practitioners to realize, in response to surprises, that it ought to be amended.

To come back to the article: we have years of research on the ultimatum game in which subject behavior under various manipulations has been carefully mapped out; and the existence of these precise results on this admittedly WEIRD dataset is why we can notice departures from them, think about how different cultural constructs might account for the violations of the model, and think about how we might adapt the model in response.
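For readers unfamiliar with the game, here is a toy sketch of the kind of behavioral model such research maps out. Everything here is a hypothetical illustration, not data or code from any study: a proposer offers a split of a pot, and a responder accepts or rejects according to a fairness threshold; rejection leaves both with nothing. Culturally different 'threshold' distributions then yield visibly different acceptance rates for the same offer.

```python
import random

def play_ultimatum(offer_fraction, rejection_threshold):
    """One round: the responder accepts iff the offered share meets
    their fairness threshold. On rejection, both players get nothing.
    Returns (proposer_payoff, responder_payoff)."""
    if offer_fraction >= rejection_threshold:
        return (1 - offer_fraction, offer_fraction)
    return (0.0, 0.0)

def acceptance_rate(offer_fraction, thresholds, trials=10_000):
    """Fraction of rounds accepted against responders whose thresholds
    are drawn uniformly from the (hypothetical) list `thresholds`."""
    accepted = sum(
        1 for _ in range(trials)
        if play_ultimatum(offer_fraction, random.choice(thresholds))[1] > 0
    )
    return accepted / trials

# Hypothetical populations: one that rejects 'unfair' offers below
# roughly 30% of the pot, and one that accepts nearly any positive offer.
fairness_minded = [0.2, 0.3, 0.3, 0.4]
take_anything = [0.0, 0.05, 0.1, 0.1]

random.seed(0)
low_offer = 0.15
print(acceptance_rate(low_offer, fairness_minded))  # low offer, always rejected here
print(acceptance_rate(low_offer, take_anything))    # same offer, always accepted here
```

The point of the toy is only that a single parameter distribution, estimated from one population, fully determines the 'predicted' behavior; swap in a different population and the same manipulation produces different results, which is exactly the kind of departure the WEIRD critique highlights.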

Put more briefly, having a lot of data on a small and unrepresentative population is a useful tool in the quest for building good models, so long as we realize, from time to time, the ways in which the models are inadequate.
