Peer review plays an essential role in establishing quality standards and highlighting the most interesting
works. While it may be tempting to judge a manuscript by its entertainment value, science is not a talent show that seeks the most impressive results, but a search for truth and knowledge. Hence, a more open-minded approach is needed:
Relevant: Is the topic linked to other relevant topics or practically useful itself?
Progress: Is there something interesting that cannot be learned from the literature?
Safe: Is the manuscript free of anything harmful that could mislead or misinform the reader?
Accessible: Considering the subject matter, is it relatively easy to read and understand?
Replicable: Would experts be able to independently arrive at the same conclusions (e.g., by verifying proofs or running similar computational or user studies)?
In contrast, reviewers might instead be tempted to look at the manuscript through the lens of prior work
and their own ideas and opinions. After all, they are distinguished authors and are asked to apply their expert knowledge and opinions when reviewing a manuscript.
Sadly, this leads to an especially subjective and narrow-minded view of the manuscript,
which may then be reinforced by a boilerplate set of questions such as the following:
Unmotivated: Is the problem not an established problem within this research community?
Incorrect: Is there some internal inconsistency or some inconsistency with prior works?
Incremental: Does the work seem to share ideas with prior work?
Lack of improvement: Does the work fail to show strong improvement using established methodologies?
Trivial: Do the ideas in the work look uncomplicated, like something the reviewer could have done themselves?
While all these points are relevant to reviewing, this perspective is focused more on the reviewer and their expert knowledge than on the truth:
An overlooked problem can still be very relevant.
Internal inconsistencies should of course be avoided, but it is desirable to break free from prior works that are flawed.
Most ideas are not as novel as they seem. Clearly presenting the links to prior works can be far more useful than dismissing a manuscript as derivative.
Established methodologies can be flawed (e.g., non-predictive theory or focusing on one objective while ignoring others).
Simple methods that are logically presented are easier to use, yet arriving at the simple view itself requires a good understanding that may be difficult to reach (and is much easier in hindsight, especially for experts).
Reviewing in the Open
An interesting idea for helping authors and reviewers reflect on the process is to open up the reviews to the public. This is practiced, for instance, at machine learning conferences such as ICLR and NeurIPS.
Such an approach maintains the anonymity of reviewers for the most part and introduces a minimal amount of accountability into the process, as sloppy and presumptuous reviews may reflect poorly on a conference.
While most computer science conferences are moving towards double-blind reviewing,
it may also be useful to consider experimenting with non-anonymous reviewing. Ideally, multiple models could coexist such that
authors and reviewers are given the choice. Having everything out in public could hurt some relationships and put too much stress on reviewers,
but anonymity also has well-known disadvantages and disassociates reviewers from the valuable work they do. Which approach works best depends a lot on the authors and reviewers involved.
If everyone involved is motivated by a search for knowledge, then a system is barely needed. If instead everything is just about the egos and careers of reviewers and authors, even the best system can only do so much.
If the problem is more cultural in nature, it likely cannot be solved behind closed doors and would require shining a light on it. In conclusion: a look behind the curtain could be very helpful.