
It's worth pointing out that most of the best science happened before peer review was dominant.

There's an article I came across a while back, which I can't easily find now, that mapped out the history of our current peer review system. Peer review as we know it today was largely born in the 1970s as a response to several funding crises in academia; it was a strategy to make research appear more credible.

The most damning critique of peer review, of course, is that it completely failed to stop (and arguably aided) the reproducibility crisis. We have an academic system where the prime motivation is to secure funding through the image of credibility, which from first principles is a recipe for widespread fraud.



Peer review is basically anonymous Github PRs where the author pinky swears that the code compiles and 95% of test cases pass.

Academic careers are then decided by Github activity charts.


The whole 'pinky swear' aspect is far from ideal.

But is there an alternative that still allows most academic aspirants to participate?


> Github


Do you understand what the parent is saying? It's clearly an analogy, not a literal recommendation for all academics to use Github.


I understand, thank you for clarifying :)

My point was that academics could use Github (or something like it).


Can you write out the argument for it, or why you believe it to be a net positive change compared to the current paradigm?


> Peer review is basically anonymous Github PRs where the author pinky swears that the code compiles and 95% of test cases pass.

It should be possible to use something like Github to *verify* "that the code compiles and 95% of test cases pass" instead of just "pinky swearing".
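
To make that concrete, here's a minimal sketch of what such a check could look like (entirely hypothetical; the tests/ directory name and the 95% threshold are just assumptions carried over from the analogy above):

    # Hypothetical CI gate: actually run a submission's test suite and
    # enforce the "95% of test cases pass" claim instead of trusting it.
    import unittest

    def pass_rate(test_dir: str = "tests") -> float:
        """Discover and run the repo's tests; return the fraction that pass."""
        suite = unittest.defaultTestLoader.discover(test_dir)
        result = unittest.TestResult()
        suite.run(result)
        if result.testsRun == 0:
            return 0.0
        failed = len(result.failures) + len(result.errors)
        return (result.testsRun - failed) / result.testsRun

    if __name__ == "__main__":
        rate = pass_rate()
        if rate < 0.95:  # threshold taken from the parent's analogy
            raise SystemExit(f"only {rate:.0%} of tests passed; submission flagged")
        print(f"{rate:.0%} of tests passed")

If the platform runs this itself on every submission, the claim is machine-checked rather than sworn to.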


Based on...?


Tests and data.


I meant what is your belief that it will be successful, or even workable, based on?


The success of Github in creating software, and the success of software in advancing scientific progress.

Maybe something like nbdev.fast.ai.

In any case, it was just a thought, and likely not an original one. I would welcome it if someone tried to build this and proved it can’t be done.

Thank you for the stimulating discussion!


Yes, the characteristics of Github are understood.

What is the actual line of argument that demonstrates this success/usefulness/etc... can be reproduced in your envisioned system?

> I would welcome it if someone tried to build this and proved it can’t be done.

It's impossible to prove a negative, so this doesn't make sense. Did you mistype?


>It's worth pointing out that most of the best science happened before peer review was dominant.

It's worth pointing out that most of everything happened before peer review was dominant. Given how many advances we've made in the past 50 years, I'm not sure everyone would agree with your statement. If they did, they'd probably also have to agree that most of the worst science happened before peer review was dominant, too.


Our advances in the last 50 years have largely been in engineering, not science. You could probably take a random physics professor from 1970 and they'd not sweat too much trying to teach physics at the graduate level today.


But a biology professor from that time period would have a lot of catching up to do, perhaps too much, especially (but not only) if any part of their work touched molecular biology or genetics.



Thanks so much for posting those. The essays were great; I hadn't seen them before.


But there is zero reason why the definition of peer review shouldn't immediately be extended to include:

- accessing and verifying the datasets (via some tamper-proof mechanism with an audit trail; see the sketch after this list), and ditto the code. This would have detected the alleged Francesca Gino and Dan Ariely frauds, and many others. It's much easier in domains like behavioral psychology, where the datasets are spreadsheets well under 1 MB rather than GB or TB.

- picking a selective sample of papers for reproducibility checks; you can't verify all submissions, but you sure could verify most accepted papers, plus the top 1000 most-cited new papers each year in each field. This would prevent the worst excesses.
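
Here's a rough sketch of what the first bullet's audit trail could look like (all hypothetical: the single-file dataset, the JSON-lines log, and the file names are my assumptions; a real system would want signed timestamps or a public transparency log):

    # Hypothetical audit trail: record a dataset's SHA-256 at submission
    # time, then verify the file hasn't changed since.
    import hashlib, json, time
    from pathlib import Path

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def record(path: str, log: str = "audit_log.jsonl") -> None:
        """Append the dataset's hash to an append-only log."""
        entry = {"file": path, "sha256": sha256_of(path), "ts": time.time()}
        with open(log, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def verify(path: str, log: str = "audit_log.jsonl") -> bool:
        """Compare the file's current hash against the first recorded one."""
        for line in Path(log).read_text().splitlines():
            entry = json.loads(line)
            if entry["file"] == path:
                return entry["sha256"] == sha256_of(path)
        return False

Any post-hoc edit to the spreadsheet then changes the hash and is immediately visible to reviewers.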

PS: a superb overview video [0] by Pete Judo, "6 Ways Scientists Fake Their Data", covers p-hacking, data peeking, variable manipulation, hypothesis-shopping and selective sampling, selective reporting, and questionable outlier treatment (a toy simulation follows the links below). It's based on the article at [1]. Also, as Judo frequently remarks, there should be much more formal incentive to publish replication studies and negative results.

[0]: https://www.youtube.com/watch?v=6uqDhQxhmDg

[1]: "Statisics by Jim: What is P Hacking: Methods & Best Practices" https://statisticsbyjim.com/hypothesis-testing/p-hacking/
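
As a toy illustration of why the first couple of techniques in [0] work (my own sketch, not from the video or article): run enough tests on pure noise and something will look "significant". With 20 independent null tests at alpha = 0.05, the chance of at least one false positive is about 1 - 0.95^20 ≈ 64%.

    # Simulate "hypothesis-shopping": 20 t-tests where every true effect
    # is zero, then report only the smallest p-value.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n_tests, alpha = 20, 0.05

    # Both groups come from the same distribution, so any "effect" is noise.
    p_values = [
        ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_tests)
    ]
    print(f"smallest p-value across {n_tests} null tests: {min(p_values):.3f}")
    print(f"'significant' results at alpha={alpha}: {sum(p < alpha for p in p_values)}")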


It seems kind of obvious that peer review is going to reward peer-think, peer citation, and incremental academic advances. Obviously that's not how innovation works.


the system, as flawed as it is, is very effective for its purpose. see, e.g., "success is 10% inspiration and 90% perspiration". on the darker side, the purpose is not to be fair to any particular individual, or even to be conducive to human flourishing at large.


yes - maybe a good filter for future academic success, which seems to be a game unto itself


academia is not about innovation; it should be trying to tend to the big self-referential kaleidoscope of knowledge.

mostly it should try to do that by falsifying things; of course, groupthink is seldom effective at that.


> It's worth pointing out that most of the best science happened before peer review was dominant.

This seems unlikely to be true, simply given how much the volume of research has grown since then. If you are arguing that the signal-to-noise ratio was better, that's different.


Have they done a double-blind test on the peer review system?




