In science a similar problem is faced: there is a wealth of information, considerable pressure to make the correct decisions about what to accept and what to reject, and not enough resources to make those decisions using all the available information. To address this, systems of review and curation were put in place so that, ideally, only the highest-quality work makes it into the collective pool of scientific knowledge. Currently this is the peer-review system, which, not unlike expert restaurant critics, relies on expert opinion as the metric for judging a project's quality and worth. This system probably has reasonable internal validity, at least within a journal, but may be lacking in external validity.

The major difference between peer review and restaurant criticism, though, is that a reviewer's main job is to be a scientist; peer review is voluntary and consequently secondary. As a result, reviewers often cannot give enough effort to scrutinizing and teasing apart submitted work. Furthermore, only a few people make this decision before a manuscript is given its seal of approval.

Perhaps stemming from the same societal wave that has given prominence to Yelp, PubMed Commons and PubPeer have been introduced as post-publication review processes. As on Yelp, full comments can be posted by anyone with an opinion, which allows for broader review than a few select reviewers can provide. More like Zagat, however, the commenters are a select group who have chosen to join and contribute, all of them scientists, and who ideally meet a minimum level of specialized knowledge. Furthermore, as stated in an interview with the creators of PubPeer, scientists tend to hold themselves to a certain standard: comments must be based on publicly verifiable information and must be substantive. The comments, as with Zagat, are curated and moderated by experts. A selection bias still exists, though, much like citation bias: people choose whether to comment, and so probably comment only when something strikes them as notable.
Nevertheless, because such a reviewer elects to review, they may be more likely to allocate an appropriate amount of time to properly scrutinizing the work, unlike those who are simply assigned a project or manuscript to review. Furthermore, since the only qualification to review is being part of the scientific community, the comments may not come from the ideal or most appropriate reviewer. On the other hand, this may make the review more generalizable. Additionally, a huge strength of the scientific community is the advancement that comes from integrating a variety of perspectives, and that integration is a foundation of these systems.
The major drawback of these new review processes is that they are post-publication, so the information being reviewed is already part of the literature. Perhaps we should take another look at the restaurant-review world to help us further solve the problem of curating and scrutinizing the scientific literature. One solution would be to pay journal reviewers, buying out some of their time so they would be less likely to skimp on the reviewing process. Another would be to assign scores to published works on PubMed Commons and PubPeer. These could be voluntary reviews, or they could be reviews by paid experts. The scoring would ideally be standardized, not unlike the Jadad scoring system for clinical trials. If anyone has other connections to draw between scientific literature review and restaurant review, or disagrees with some of the connections I drew, I would love to hear about it in the comments.