A shady “call for papers” e-mail
I received a “call for papers” e-mail today. The publisher claimed they had become interested in my work at a conference I just attended, and they added that if I were interested in joining the editorial board, I could send my CV back.
The message is quite flattering, except that I neither presented a poster nor gave a talk at that event, and I am not a major author of any publication. You might have guessed: it was an invitation from some shady and presumably predatory publisher. The journal even has a name that is nearly indistinguishable from that of an official journal of the Royal Pharmaceutical Society.
We in academia are in a more or less desperate pursuit of publications. This is no surprise, given that published articles are almost the only thing that counts in a grant application, in the search for a position, or even in a friendly chat around a snack table at a conference. While we know that the number and venue of one’s publications are an inaccurate reflection of how good a scientist is, accomplishment in science is notoriously difficult to quantify, and the need to compete with one another breeds a sometimes blind belief in the peer-review process, despite its limitations.
Because we are so eager to get our work published, it becomes easy to convince people to pay a fee to do so.
I am not sure there is a fix for this. It is hard to imagine people ever giving up a score or a label for quickly judging someone, so if it were not academic publications, it could well be some other proxy. We could do our best to find a better surrogate for scientific contribution, but we must also keep in mind that once we start measuring something, the measurement itself changes the behavior being measured. As a result, a Sisyphean effort seems necessary: we must either keep improving the current system or keep developing new ones.
Speaking of improving peer review, eLife has recently been experimenting with something beyond its current cooperative review model, and the Company of Biologists is promoting visibility and discussion of preprints by devoting a commentary section to them. I am optimistic about how they are trying to involve more people in peer review, but whether the amount of involvement needed to make peer review better exceeds the limits of human attention remains a concern.
Anyway, I guess the solution is an old-fashioned motto for scientists in general: keep examining whether a score reflects what it should, and always have the courage to challenge existing beliefs.