
Common sense in Scholarly Peer Review

A great set of instructions for reviewers can be found here; it mentions some important points for good reviews.

Proposal: Positive Review Bidding

Reviewer number three is a popular moniker for reviewers who make peer review an unnecessarily unpredictable and unpleasant experience. Fortunately, if the problem lies with just one out of three reviewers, it may be feasible to weed them out with a little help from the authors (who gain a bit of power in the process):
The most important underlying assumptions of the proposal:
  1. Reviewing is antagonistic in nature (e.g., the burden of proof lies with the authors)
  2. Suitable reviewer → gets the point of a paper (e.g., the progress metric used)
  3. Positive reviews → hints to authors about which reviewers are suitable
  4. Effort of writing positive reviews < effort of writing full reviews
  5. Open-mindedness before a decision > after making/justifying a decision
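The two-stage process implied by these assumptions can be sketched as a minimal data model: reviewers first submit purely positive reviews with clarifying questions, authors then select which reviewers proceed to full reviews. This is only an illustrative sketch; all class and function names are hypothetical, and the post does not prescribe any implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PositiveReview:
    """Stage one: a decision-free review naming the paper's merits."""
    reviewer: str
    summary: str          # what the reviewer thinks the paper contributes
    questions: list[str]  # clarifying questions, answered before any decision

@dataclass
class Submission:
    title: str
    positive_reviews: list[PositiveReview] = field(default_factory=list)
    selected: list[str] = field(default_factory=list)

    def select_reviewers(self, k: int) -> list[str]:
        """Stage two: authors pick the k reviewers whose positive reviews
        show they get the point of the paper; only these write full reviews.
        Placeholder ordering here; in practice the authors rank manually."""
        self.selected = [r.reviewer for r in self.positive_reviews[:k]]
        return self.selected

paper = Submission("Positive Review Bidding")
paper.positive_reviews = [
    PositiveReview("R1", "Targets the right progress metric", ["Which CMT?"]),
    PositiveReview("R2", "Clear problem statement", []),
    PositiveReview("R3", "Seems to miss the point", []),
]
paper.select_reviewers(2)  # authors keep the two reviewers who got the point
```

The key design point the sketch makes explicit: no decision field exists at the positive-review stage, so open-mindedness is preserved until after author responses.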
Some potential advantages (+) and disadvantages (−) to consider:

  More scientific value?
    + Reviewers who entirely miss the point of the paper are less likely to be selected by authors
    − Authors may avoid some more critical reviewers (who seem less able to write a convincing positive review)
  Meaningful interaction?
    + Reviewers receive author answers (to questions posed at the positive-review stage) before being asked for a decision
    − Dismissive reviewers may ask leading questions to influence others (mitigated by author responses)
  Respectful?
    + A little bit of power is given to authors (who are also experts deserving respect!)
    − Some wasted effort for reviewers not selected by authors
  Considers human nature?
    + Some negative instincts may be inhibited if only a positive review is requested at first
    − Some reviewers may try to trick authors by making up positives (easy to spot?)
  Implementable?
    + Can be implemented in CMTs [1,2,3] as a rebuttal to positive reviews
    − No rebuttal to full reviews (which is perhaps in any case too late)
  Feasible?
    + Positive reviews without a decision are easier to write?
    − Some reviewers may refuse to write a purely positive review without a decision?
  Emotionally sustainable?
    + Hope for authors with bad experiences?
    − Unconventional and potentially confusing for reviewers
  Prevents accidental bidding?
    + More clarity on targeted progress metrics through an explicit description of the targeted audience
    − Authors/reviewers may be confused by audience descriptions (that are only visible during the reviewing process)

Some additional questions and answers:
  1. Technical Implementation
  2. Positive Reviews
  3. Reviewer Selection
  4. Non-Selected Reviewers

Accountability: Reviewing in the Open?

An interesting idea to help authors and reviewers reflect upon the process is to open the reviews up to the public. This is, for instance, practiced in machine learning conferences such as ICLR and NeurIPS. Such an approach maintains the anonymity of reviewers for the most part and introduces a minimal amount of accountability into the process, as sloppy and presumptuous reviews may reflect poorly on a conference.

While most computer science conferences are moving towards double-blind reviewing, it may also be useful to consider experimenting with non-anonymous reviewing. Ideally, multiple models could coexist such that authors and reviewers are given the choice. Everything being out in the public could potentially hurt some relationships and put too much stress on reviewers, but anonymity also has well-known disadvantages and disassociates reviewers from the valuable work they do.

Which approach works best depends a lot on the authors and reviewers involved. If everyone involved is motivated by a search for knowledge, then a system is barely needed. If instead everything is just about the egos and careers of reviewers and authors, even the best system can only do so much. If the problem is more cultural in nature, it likely cannot be solved behind closed doors and would require shining a light on the problem. In conclusion: a look behind the curtain could be very helpful.