bioRxiv and Citations: Just Another Piece of Flawed Bibliometric Research?
Even a flawed paper can offer lessons on how (not) to report, and what (not) to claim.
Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication, and survey data. He holds a Ph.D. in science communication from Cornell University (2010), has extensive experience as a science librarian (1995–2006), and was trained as a life scientist. https://phil-davis.com/
After fabricating a claim about a nonexistent AAAS study, the AI software admitted its mistake and apologized.
Editors at The BMJ are lousy at predicting the citation performance of research papers. Or are they?
Twitter does not increase citations, a reanalysis of author data shows. Did the authors p-hack their data?
When a reputable journal refuses to get involved with a questionable paper, science looks less like a self-correcting enterprise and more like a way to amass media attention.
Article Attention Scores for papers don’t seem to add up, leading one to question whether Altmetric data are valid, reliable, and reproducible.
Can Clarivate deliver on a single, normalized measurement of citation impact or did its marketing department promise too much?
Do Sci-Hub downloads cause more citations, or are high impact papers simply downloaded more often?
Some journals are expected to benefit immensely under Clarivate’s new counting model.
Starting in 2021, Journal Impact Factors will be calculated using online publication dates, not print ones, but a phased roll-out may introduce bias for some journals.