What do we mean when we talk about journal impact? Digging into citations and quality in scientific publishing.

[Figure: 2017 median citation distribution]

‘Impact’ is one of those words that can simultaneously provoke hope, anger, anxiety, and derision among scientists, publishers and research funders alike. We expect the full range of emotions following last week’s release of the latest journal Impact Factor stats.

When journals talk about their impact, it is important to clarify what we mean. Are we talking about the collective impact of scholarly articles as measured through the scholarly currency of citations, or about other less quantifiable but no less important measures of a journal’s reach and influence in the scholarly community and beyond?

Now that PeerJ and PLOS ONE both have an Impact Factor of two, this seems as good a time as any to dig further into what quality scientific publishing means in the 21st century and how it can be measured and improved on in the future. What further information can we provide our authors to demonstrate the quality of the journal?

By now, many are aware of the limitations of the Impact Factor, especially as a measure for determining the quality of an individual research paper. Nonetheless, it remains a metric of importance to many researchers. As signatories of the San Francisco Declaration on Research Assessment (DORA) we at PeerJ are keen to provide a richer view of journal performance and have been open and transparent about our citation distribution. Indeed, we provide information on citation distribution and median citation rates on our home page and article pages. Here’s a quick look at our current citation distribution data:
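The median and percentile citation figures mentioned above are straightforward to compute from per-article citation counts. As a minimal sketch with made-up illustrative numbers (not real PeerJ data), using a simple nearest-rank percentile:

```python
# Hypothetical sketch of how a citation-distribution summary like the one on
# PeerJ's home and article pages might be computed. The citation counts below
# are invented for illustration only.
import statistics

citations = [0, 1, 1, 2, 3, 3, 4, 5, 7, 9, 12, 19]  # citations per article

def percentile(data, p):
    """Nearest-rank percentile: the value at rank ceil-ish p% of the sorted data."""
    ranked = sorted(data)
    k = max(0, int(round(p / 100 * len(ranked))) - 1)
    return ranked[k]

print("median:", statistics.median(citations))        # 3.5
print("75th percentile:", percentile(citations, 75))  # 7
print("90th percentile:", percentile(citations, 90))  # 12
```

Reporting the full distribution like this, rather than a single mean, is exactly what makes a skewed citation pattern visible.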

As an academic-centered publisher, we also seek to understand why it is that so many authors continue to rely on the Impact Factor when deciding where to publish. First, it is worth considering what it is about the Impact Factor that is actually useful for researchers. It may be because their promotion and tenure committees still rely on it (or are perceived to still rely on it) or it may be because it is simply an easy shorthand for understanding journal quality in a crowded market. The fact that there are so many different metrics designed to capture ‘journal impact’ suggests this question is still open for debate and deserves further attention from publishers.

Given the lessons learned from an over-reliance on the Impact Factor, we should be vigilant to ask what exactly is being measured, what are the indicators being used for, and who are these journal metrics actually for. What would a researcher-focused metric for journal quality look like?

Assuming a low-hoop score is preferred, PeerJ’s hoop-free submission system would definitely rock this metric!

Answers will certainly vary and every journal has to decide what to prioritise, but given current trends in publishing and our own author survey data, we are aware of two primary considerations of importance for scientists today: high-quality peer review and a broad, far-reaching audience. Later blog posts will focus on each of these aspects in greater detail, but it is worth considering more generally what these characteristics have to do with ‘journal impact’ today.

Peer review is the bedrock on which journal reputation should rest. It is the mechanism through which science is evaluated and thus remains a central priority of our operations. But what makes a quality peer review process? Review focused entirely on efficiency over a fair and constructive process would be inadequate. We find that a balance of thoroughness, responsiveness, transparency and efficiency is the best way forward. Our editorial system is designed with this balance in mind to support our authors, editors and reviewers in this process.

PeerJ’s editorial model is based on post-publication transparency of the peer review history. Authors can choose to make their peer review history public and reviewers can choose to sign their reviews. The shift to wider transparency in science won’t happen overnight, and it is important for journals to move with researchers, not against them. But encouraging more sunlight where there has previously been little is a step towards a lasting, positive impact.

A more external consideration of a journal’s impact is audience. Authors want their work to be read. Open access alone provides a great boost of exposure to scientific research. But an added benefit of PeerJ’s reach is that with such a multidisciplinary scope, researchers are able to get their work read by a similarly broad audience. Community journals are certainly effective for reaching particular specialists, but in an age of increasing collaboration across disciplines, a journal like PeerJ may be a more suitable venue for sharing results widely across the scientific community.

The Impact Factor is neither the only way nor a particularly good way for researchers to assess journal quality. But it is not enough for journals to sit back and decry the influence of the Impact Factor without digging further into why it is being used and what their authors are looking for when it comes to deciding where to publish. Further posts in this series will continue to explore aspects of journal quality, how our own authors decide where to publish, and what we are doing to ensure PeerJ remains a high-quality journal, by researchers and for researchers.

  • Jan A. Veenstra

I am simply not smart enough to understand this. If the 90th percentile of papers get 18.6 citations, that means that ten percent of PeerJ papers are cited at least 18.6 times after two years. That 10 percent of PeerJ papers should then contribute 1.86 to the impact factor. Similarly, if the 75th percentile is 10 citations, that means that another 15 (25-10) percent is cited at least ten times; those publications should contribute another 1.50 to the impact factor. In a similar fashion the papers between the 50th and 75th percentile should still add 0.25 * 5 = 1.25 to the impact factor, and even those between the 25th and 50th percentile add at least 0.25 * 2 = 0.5 to the impact factor. In other words, the impact factor should be at least 5.11. Can the difference between 2.2 and 5.11 entirely be attributed to the increase in number of papers published over time?

    • Sierra Williams

Hi Jan, as far as I know the exact details of how the Impact Factor is calculated are still not entirely transparent, so it is difficult to reproduce, but one big reason for the difference between the metrics here is that the Impact Factor is calculated on a yearly average citation rate, whereas we were looking at the past four years due to the two-year citation lag. I certainly don’t want to give the impression that the two numbers represent the same thing! Rather, I was making the case for this alternative metric as a more holistic portrayal of the journal, and also of course for the shortcomings of the IF. More useful, I think, would be a comparison of median citation distributions across other journals.
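The lower-bound arithmetic in the comment above can be written out directly. The percentile values here are the ones Jan quotes, not official figures, and as noted in the reply the result is not directly comparable to the Impact Factor itself:

```python
# Lower bound on mean citations per paper from percentile brackets.
# Each bracket contributes at least (fraction of papers) x (citations at the
# bracket's lower boundary) to the average. Figures are those quoted above.
brackets = [
    (0.10, 18.6),  # top 10% of papers: cited at least 18.6 times
    (0.15, 10.0),  # 75th-90th percentile: at least 10 citations
    (0.25, 5.0),   # 50th-75th percentile: at least 5 citations
    (0.25, 2.0),   # 25th-50th percentile: at least 2 citations
]

lower_bound = sum(frac * cites for frac, cites in brackets)
print(f"lower bound on mean citations: {lower_bound:.2f}")  # 5.11
```

The gap between this bound and the reported Impact Factor of ~2 is the puzzle the comment raises; differing citation windows and denominators account for much of it.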
