Let’s all be clear – why citation distribution matters and what we’re doing about it
As yet another year of impact factor data is published, the same discussions arise as to whether such metrics are damaging the ecosystem of scientific research and the publishing of results. For many, including ourselves, the impact factor represents an incentive system that is burdening science with a skewed evaluation of research and of the journals in which it is published. This in turn adds to the wider problems of reproducibility and the speed and cost of publishing research. PeerJ was born out of frustration with these very issues, and whilst we understand that the Impact Factor is still very important for some, we believe individual research articles are best assessed on their own merits rather than by the aggregate citation count for the entire journal in which the work is published.
We signed the San Francisco Declaration on Research Assessment (DORA) for this very reason. So rather than worrying about the Impact Factor of the journal, it is more important to us that a researcher’s work gets the visibility it deserves (be that through citations or any other relevant metric), whilst delivering this through an exceptional publishing experience from submission to publication. An excellent recent blog post on this very topic by Professor Stephen Curry draws attention to what more journals can and should be doing to highlight citation distribution of their articles – and we agree.
So rather than just standing by and commenting from the sidelines, what exactly are we doing to be more open with our citation data at PeerJ? Well, we already include citation data from CrossRef and Scopus on our articles, plus we’ve added 2-year ‘median citation’ data on all articles. You’ll find this data on the top right of each article page as shown below.
In addition to this, we also publish citation counts across our percentiles. For instance, if you take the typical time frame of two years, then the citation distribution for PeerJ as of May 2015 is:
- 90th percentile: 10.8 citations
- 75th percentile: 8 citations
- 50th percentile (median): 4 citations
- 25th percentile: 2 citations
The two-year citation distribution paints a more realistic picture of what can be expected from publishing an article in PeerJ. This transparency puts pressure on us to ensure we’re working for all authors to maximize the visibility of their research. We can’t just hide behind a single metric of average citations, which may be the result of a single highly visible article skewing the data. If citation distributions were to become the norm across all journals, it could have a profound impact (no pun intended) on science. Besides, there’s a reasonable argument that if distributions (versus averages) are considered good statistical practice when doing science, then they should probably also be used when measuring the bibliometrics of science.
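The distortion a single highly cited article causes is easy to see with a toy example. The citation counts below are hypothetical illustration, not actual PeerJ data: one outlier pulls the mean well above the median, which is exactly what reporting the full distribution exposes.

```python
import statistics

# Hypothetical citation counts for ten articles in a journal:
# nine modestly cited papers plus one highly visible outlier.
citations = [1, 2, 2, 3, 4, 4, 5, 6, 8, 120]

mean = statistics.mean(citations)      # pulled upward by the outlier
median = statistics.median(citations)  # the 50th percentile of the distribution

print(f"mean:   {mean}")    # 15.5
print(f"median: {median}")  # 4.0
```

Here the average suggests a typical article earns over fifteen citations, while the median shows the typical article actually earns four. Reporting percentiles, as above, gives readers the honest picture.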
We also include article-level metrics on all of our article pages to ensure that each article gets individual credit and the recognition it warrants. Our article-level metrics enable anyone to see the number of visitors, views and downloads an article has received, and also link to the referral sources of visitors. No big time lags, and no chasing for links to the coverage – it’s all in one place.
In addition to citation data transparency, we also enable post-publication commenting through our Q&A system, allowing for an ongoing open dialogue with an author once an article is published. All articles published in PeerJ are indexed through global indexing services, and PeerJ Computer Science is now being indexed by dblp and Google Scholar.
Whilst there is always more we can be doing, we hope our current efforts encourage other journals to move towards a more transparent process of individual article citation data. Let’s all be clear – for the benefit of science it’s time for journals to be more open about citation distribution.