A World Without Referees (2012) [pdf] (cmu.edu)
25 points by barry-cotter on May 9, 2020 | hide | past | favorite | 15 comments


I might agree with the author but why focus on a small part of the process? What is the ideal way to fund, recognize and reward scientific efforts? Has there been significant progress in each of those fronts?

Maybe it's my biased opinion, but I feel that universities used to have a better balance in terms of exploration/exploitation by giving more space to nonconformists and eccentric characters. These days, the system seems to be very skewed towards conformists.


The balance is gone, at least in the US, because of several reasons, some easily quantifiable, others more subtle or difficult to quantify.

Most of it is due to the prevalence of indirect cost (overhead) funding on grants in the US. The way this is budgeted, universities bring in far more money than their actual costs, so there are huge incentives to prioritize grant funding over actual research. Because of intense competition and the way grants are evaluated, grant awards are only weakly related to impact or progress, and instead are targeted at whatever is most likely to be popular.

Another contributing factor is reliance on bibliometric indices like the h-index. Although this seems reasonable, what it leads to is huge pressure to publish what's popular over other considerations, with lots of co-authors. High-status journals I've reviewed for have said explicitly in their instructions to reviewers (and I'm only weakly paraphrasing), "don't evaluate the methodological integrity or the integrity of the conclusions, only evaluate how popular this is likely to be."

What happens is you're incentivizing groupthink very heavily. Combine this with the erosion of tenure (either through elimination of tenure-track lines or of protections for intellectual exploration), especially in fields like biomedicine where the financial conflicts of interest are huge, and it leads to big problems.

I have mixed feelings about peer review. I do think it's overrated, broken, and not transparent enough. Having done it for over a decade, I'm still surprised by ethically questionable things that happen (ironically, in my experience this actually seems to happen the most in fields that are supposed to be more "objective", like statistics or math). On the other hand, my emerging experience with the unbridled internet is that it just reinforces the TED-ization of academics, in that things like Twitter attention get mixed in with more formal evaluation of research.

Given the choice between the traditional peer review system and some more open free-for-all, I'd choose the latter for multiple reasons, but both have issues. In today's badly incentivized climate, the more open approach does have downsides, but maybe if the culture were fixed in other ways those concerns would be alleviated.

To address your question about what to do: there have been lots of proposals but they don't seem to get traction.

Open publication, archives, etc. seem to be moving in the right direction, but many of the grant solutions (eliminating indirect funds or making them line-item justified, lottery systems, funding based on publications through awards) haven't gotten traction.


> High-status journals I've reviewed for have said explicitly in their instructions to reviewers (and I'm only weakly paraphrasing), "don't evaluate the methodological integrity or the integrity of the conclusions, only evaluate how popular this is likely to be."

(Academic mathematician here)

I'm curious, how do they get away with this?

In mathematics, it's common and perfectly acceptable to turn down requests to review a paper. (You're not supposed to turn down all of them.) When I've done so, because I've already taken on too many reviewing assignments at one time, I've never gotten pushback from journal editors.

If some journal editor asked me to review a paper, and gave me instructions along the lines that you described, then I'd just tell them "I'm very sorry, but I'm unusually busy at the moment."


Those comments weren't at a math journal, but a more applied biomedical journal.

I also have started refusing requests from that journal.

The instructions I referred to weren't available until well after I had accepted the review request, for what it's worth.


They give the instructions only after you've accepted? That, too, seems odd to me. I guess it is what it is.


The system is skewed towards conformists because the eccentric characters of old made it their point to hire unquestioning conformists to further their work while shielding themselves from any scrutiny.

And now we have a system designed to pay lip service to scrutiny and packed with conformists who made their cushy career by not rocking the boat.


The author makes some valid points, but doesn't consider downsides very carefully. In a world where there is no central publication venue, those who are "famous" (or work in famous institutions) have a very large advantage over newcomers, who can't simply "send email to their colleagues." A little more thought and discussion of this part of the problem would be useful.


> The refereeing process is very noisy, time consuming and arbitrary. We should be disseminating our research as widely as possible. Instead, we let two or three referees stand in between our work and the rest of our field. I think that most people are so used to our system, that they reflexively defend it when it is criticized. The purpose of doing research is to create new knowledge. This knowledge is useless unless it is disseminated. Refereeing is an impediment to dissemination. Every experienced researcher that I know has many stories about having papers rejected because of unfair referee reports. Some of this can be written off as sour grapes, but not all of it. In the last 24 years I have been an author, referee, associate editor and editor. I have seen many cases where one referee rejected a paper and another equally qualified referee accepted it. I am quite sure that if I had sent the paper to two other referees, anything could have happened. Referee reports are strongly affected by the personality, mood and disposition of the referee. Is it fair that you work hard on something for two years only to have it casually dismissed by a couple of people who might happen to be in a bad mood or who feel they have to be critical for the sake of being critical?


The peer review process is closed and riddled with conflicts of interest and reviewer biases.

On a tangential note I think the reproducibility crisis is jarring. There should be a way to rank research by reproducibility.


In the current system, reviewers don't have much skin in the game; they can shoot down a good paper with very little accountability or consequences. Maybe there should be a system where the reviews themselves are also ranked and commented on by the group of reviewers, or by the much wider community.


That makes sense, because that's how film critic reviews work.


In the last eight years, we've gotten a lot closer to this with the increasing use of preprint servers (arXiv, bioRxiv, etc.).

Until recently, I was pleasantly surprised by the quality of most preprints. Most seemed roughly comparable to peer-reviewed papers. The COVID crisis seems to have unleashed a deluge of lower-quality work, though, and I think we haven't quite figured out how to do (and reward) reviewing.


I would love to see peer review done on a greater variety of research outputs, not just papers.

Why not sometimes have movies or pieces of software or physical objects or whatever is the most effective representation of the work?

We all have computers now, and international shipping is (usually) pretty easy; it could expand the boundaries of research subjects and methods.


On the other hand, "hey, look at this preprint!" isn't exactly useful right now either.

There's a balance.


Hasn't the internet solved a lot of these problems?

For example, you can make the argument that hacker news is sort of a peer-review system for articles from everywhere - personal, academic and commercial.



