June 3rd, 2011
In astronomy, and in most academic fields, research is published in so-called “peer reviewed” journals. These are the publications that count. Every paper has been reviewed by at least one other scientist, and sometimes several depending on the field. That review process means an expert in the field has decided that the research is worth publishing, and has often asked for corrections or improvements before recommending the paper for publication.
(I’ve written about peer review several times in the past, if you want to read more. See also this article.)
What I want to write about here, briefly, is some thoughts on improving peer review.
Right now it’s pretty random in some ways. An editor picks a potential referee, sends them the title, author list, and abstract, and asks them if they can write a report within 2-4 weeks. If the potential referee declines, they often suggest someone else. Editors are not experts in every subfield out there, and sometimes don’t pick the best people. Or they pick the best person for one of the topics covered in a paper, but papers can span multiple subfields and it can be difficult or impossible to find someone expert enough in every area. Or the editor gets a real expert to referee the paper, and that referee turns out to be slow, sloppy, or just a plain obnoxious dick. If the busy editor doesn’t look closely or get a comment from the paper’s authors, they might not make a note to skip that referee in the future. Oh, and the referee can reveal themselves to the authors or remain anonymous, so you can’t easily ask an editor not to send your paper to a particular super-obnoxious referee.
It’s possible to referee a paper without being obnoxious, but some people don’t seem to know how. You don’t make personal comments. You don’t comment on the authors themselves, only on their work. You do point out errors, missing information, issues with the writing, etc., but you don’t have to include insults or put-downs while doing so (which is especially annoying when the referee turns out to be the one making the mistake, which happens).
If the initial referee’s report is negative, revisions usually follow, along with additional rounds of refereeing. I’ve asked for three rounds of revisions myself in the past. Sometimes the referee finds the paper totally unacceptable, and the authors can request a second opinion from a new referee.
I know some editors and have talked with them about refereeing issues before. They’re smart, capable people, but no one is perfect, and the system is neither perfect nor particularly accountable.
From the referee’s side…they get little out of the process. Serious researchers are expected to referee papers, but no one will ever get a promotion, or miss one, because of their refereeing or lack thereof. Editors ask us to referee, and we carve out some time from our schedules to do it. I’ve spent as little as an hour refereeing a paper (a great, short, clear, simple paper). I’ve also spent a week (a flawed, though not fatally so, badly written, long, overly complicated paper). I’m usually too busy to referee, but I almost always say yes if it’s a paper I’m genuinely expert enough to judge, something that sounds fishy and wrong, or something that looks interesting and educational, so that I learn something for my time.
Anyway, that’s a lot of introductory information and background. Here are my suggestions:
1. Journals should provide feedback forms for authors to rate their referee. Some obnoxious referees provide great feedback and that should be acknowledged, but some kind referees also miss errors that the authors catch.
2. Journals should keep track of the number of papers each referee has handled and the ratings they receive (a rough sketch of this bookkeeping follows the list).
3. Journals should put bad referees on probation if their ratings are too low, and give them feedback about why they are on probation. This might make some of the assholes out there think twice before they make unprofessional personal comments. Repeated probations could lead to being blacklisted, which would be a win-win for everyone, I expect.
4. Journals should issue awards to the most prolific and highest rated referees. The awards need not be monetary. Just a “Top 100 Reviewer for the Astrophysical Journal 2011” title would be something a scientist could put on their CV that might actually mean something when going up for tenure or a merit raise.
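To make suggestions 2 and 3 concrete, here’s a rough sketch of the kind of bookkeeping a journal could keep for each referee. To be clear, this is just an illustration of my own: the 1-5 author ratings, the probation cutoffs, and all the names are invented for the example, not anything the journals actually use.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class RefereeRecord:
    """Hypothetical per-referee record a journal could maintain."""
    name: str
    ratings: list = field(default_factory=list)  # author feedback scores, e.g. 1-5

    def add_report(self, rating):
        """Log a completed report along with the authors' rating of it."""
        self.ratings.append(rating)

    @property
    def papers_refereed(self):
        return len(self.ratings)

    @property
    def average_rating(self):
        return mean(self.ratings) if self.ratings else None


def on_probation(record, min_reports=5, threshold=2.5):
    """Flag consistently low-rated referees; the cutoffs here are made up."""
    return (record.papers_refereed >= min_reports
            and record.average_rating < threshold)


def top_reviewers(records, n=100):
    """Rank referees by volume and rating, for a 'Top 100 Reviewer'-style list."""
    active = [r for r in records if r.papers_refereed > 0]
    return sorted(active,
                  key=lambda r: (r.papers_refereed, r.average_rating),
                  reverse=True)[:n]
```

An editor could glance at the probation flag before picking a referee, and pull the top-reviewer list once a year for the awards in suggestion 4.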
This is a little extra work for the journal editors, who are not usually well paid for their effort, but the referees are not paid at all and have to weigh their effort against their own research. If no one were willing to referee papers, the system would break down. As it is, it isn’t clear that the best people are encouraged to referee, or that the worst people are discouraged (or encouraged to improve).
Next time I’m chatting with one of my editor friends, I’ll make these suggestions. If one journal starts doing this and begins getting its first-choice referees more often, there will be pressure on the others to follow.
As a biologist, I can say our papers are usually reviewed by 2-3 referees. In the case of one good report and one bad one, a tie-breaker can usually be requested. That’s my 2 cents on the background.
I like your suggestions. That way refereeing wouldn’t be done completely for free. Also, besides rewards, there could be negative consequences for getting consistently low feedback. Maybe a hiatus period for submitting papers to that specific journal.