One of my students recently asked “what do you guys actually do when you are not teaching classes?”. While answering “research”, and pointing him to the scientific publications on my team’s website, was a good answer, it still isn’t (even ignoring committee work and administration) quite a complete answer …
One of the at times fun and interesting, but most often rather tedious, things that academics spend their time on is “peer review” of the work of other academics.
Peer review is done before a piece of work is published in a journal or presented at a conference, and it is one of the two key components of the QA process of science (the other being the independent repetition of experiments, but that’d be the topic of another posting).
Peer reviews are, therefore, incredibly important to do – doing them, and doing them well, is part of the “community service” that any academic morally owes.
It is also incredibly hard work to do a good review. Reading the paper is the easy bit, as is commenting on trivial things such as writing style, etc. Though that is really the job of the journal’s editor, not of the scientific peer reviewers, a good review will point out such things as well.
It is usually also easy, albeit somewhat time-consuming, to verify that experiments are properly documented, that sufficient data is presented to allow drawing the conclusions that the paper draws, to check the maths for errors, etc. That’s part of the QA process, and is of course hugely important.
And, it is also relatively easy to enumerate all the things (hypotheses, use cases, premises, etc.) that you, as a reviewer, disagree with. That is even slightly (but only slightly) helpful to the authors, who ultimately receive the reviews: it points out what can potentially be improved, though it doesn’t really tell the authors how to do so.
A frustrating number of reviews do, however, just do the above: check that the paper is written in reasonable English, check that no egregious mistakes are made, produce a list of “issues”, and then issue a 👍🏽 / 👎 verdict worthy of Caesar: Accept or Reject.
The most valuable part of a peer review is, however, also that which is really hard to provide, and which requires a LOT of effort on the part of the peer reviewer: constructive suggestions, both on how to improve the areas that you as a reviewer take issue with, and on how to exploit and take further the findings that you as a reviewer appreciate.
I’d argue that this is also the whole point of having the review done by peers, that is, by fellow practicing scientists: to have a scientific dialogue with another expert in your field.
One of the most enraging reviews I got in my career so far (but, dear peers, this is not a challenge), about a decade ago, was simply:
I disagree with some of the premises of this paper, and I do not like the results.
That’s all there was to that review. No details as to which premises were in dispute. And what is the scientific meaning of “I do not like the results”? Facts are facts – at least in science, that’s still the case.
(This was a review for a conference paper, btw., and it also speaks volumes about the technical program committee of that conference that they actually let that review through to the authors. The paper in question was shortly thereafter published at a venue with a more serious review process.)
But, doing a good review takes time and effort. Getting a head start on my “academic community service” for 2019, I reviewed a bunch of papers earlier this month – to the point that my review pile is now empty.
One of those papers was about forwarding in computer networks (which I know quite a bit about), specifically in Information-Centric Networking (“ICN”, which I know about, but am not an expert in, and so do not know all the state of the art), whilst preserving privacy (which I do know something about), using some weird operations on elliptic curves (about which I know next to nothing).
Doing a good review of this paper required catching up with what had happened lately in the ICN space, as well as having coffee with a cryptologist colleague, an expert on elliptic curves, who could confirm some things that I had thought “odd” in the paper, could point out other oddities that leapt out at his trained eye, and could catch me up on the relevant bibliography within his field to also consider.
In the end, doing the review took a bit more than half a day for me … to figure out not just “what I disagree with”, but to get to a point where, while I ultimately recommended that the paper not be published as-is, I could provide constructive suggestions for the authors to go forward.
Equally important, it was an occasion that triggered me to catch up with the latest developments in a domain that I otherwise follow only peripherally (ICN), as well as to “get educated” by a colleague in a scientific domain distant from my own.
All in all, half a day well spent — for myself and, I hope, also for the authors of the paper.
Now, my own “moral compass” is to do about as many peer reviews as I ask for in a year — that is, if I author and submit n papers in a year, I’ll try to also do at least n peer reviews.
By now, on January 31, 2019, I’ve completed 8 peer reviews — which, I guess, also means that I’ve somehow set a lower bound for how many papers I have to produce in 2019 …