Hacker News

The guidance needs to be more specific. Failing to use AI for search often means wasting a huge amount of time: ChatGPT 5.2 Extended Thinking with search enabled speeds up research enormously, and I'd be more concerned if reviewers were NOT making use of such tools in their reviews.

Seeing the high percentage of reviewers using AI to compose reviews is concerning. But peer review is also an unpaid racket whose outcomes seem basically random anyway (https://academia.stackexchange.com/q/115231), and it probably needs to die given alternatives like arXiv, OpenPeerReview, etc. I'm not sure how much I care about AI slop contaminating an area that might already be mostly human slop in the first place.



That's the wrong way of using AI in peer review. A key part of reviewing a paper is reading it without preconceptions. After you have done the initial pass, AI can be useful for a second opinion, or for finding something you may have missed.

But of course, you are often not allowed to do that. Review copies are confidential documents, and you are not allowed to upload them to random third-party services.

Peer review has random elements, but that's true of any situation (such as job interviews) where the final decision is made using subjective judgment. There is nothing wrong with that.


> A key part of reviewing a paper is reading it without preconceptions

I get where you are coming from, but in my opinion, no, this is not part of peer review (where expertise implies preconceptions), nor of anything else humans do. If you ignore your preconceptions and/or priors (which are formed from your accumulated knowledge and experience), you aren't thinking.

A good example from peer review (which I have done): I see a paper where I have some expertise in the technical/statistical methods used, but not in the very particular subject domain. I can use AI search to find papers in that subject domain faster than I could on my own, and then more quickly see whether my usual preconceptions about the statistical methods are relevant to the paper I have to review. I still have to check things, but previously this took far more time and clever crafting of search queries.

Failing to use AI for search in this way harms peer review, because in practice you will do less searching and checking than AI does (you simply don't have the time, peer review being essentially unpaid labor).


By "without preconceptions", I mean that your initial review should not be influenced by anyone else's opinions. In CS, conference management software often makes this explicit by requiring you to upload your review before you can see other reviews. (You can of course revise your review afterwards.)

You are also supposed to review the paper and not just check it for correctness. If the presentation is unclear, or if earlier sections mislead the reader before later sections clarify the situation, you are supposed to point that out. But if you have seen an AI summary of the paper before reading it, you can no longer do that part. (And if a summary helps to interpret the paper correctly, that summary should be a part of the paper.)

If you don't have sufficient expertise to review every aspect of the paper, you can always point that out in the review. Reading papers in unfamiliar fields is risky, because it's easy to misinterpret them. Each field has its own way of thinking that can only be learned by exposure. If you are not familiar with the way of thinking, you can read the words but fail to understand the message. If you work in a multidisciplinary field (such as bioinformatics), you often get daily reminders of that.


Researchers use it to write the papers themselves: https://www.science.org/content/article/far-more-authors-use...


Then on top of that there's the slop that comes from the university's PR department, where they turn "New possibly-interesting lab result in surface chemistry" into "Trillion dollar battery technology launched".

(Now that I think about it, I haven't seen much battery hype lately. The battery hype people may have pivoted to AI. Lots of stuff is going on in batteries, but mostly by billion-dollar companies in China quietly building plants and mostly shutting up about what's going on inside.)



