Today, PeerJ published a study entitled “Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action” by Dr Paul Brookes. The article deals with how corrections to the scientific literature are handled, and so we felt it would be informative to have Dr Brookes provide some insight into the publication.
PJ: What are the conclusions of your study?
Paul Brookes: “That internet publicity of data problems in published papers is associated with a greater level of corrective action taken on those papers, compared to papers for which there was no public discussion of their alleged shortcomings. Although it is tempting to speculate that this is a cause-and-effect relationship, there are a number of caveats to the study, in the form of external factors that could have accounted for some of the difference between the two groups of papers. However, the difference was so large (7-fold), that the most likely explanation is the publicity itself.”
PJ: Do you think the publication of your results will increase the number of corrective actions?
PB: “I would hope so, although of course every journal and publisher has their own policies in terms of how they choose to respond to allegations of problem data, from either anonymous or named informants. The data do suggest that the journals are paying attention to the public realm of discussion – sites such as PubPeer and PubMed Commons – which is a good sign.”
PJ: Your study has some important limitations, including the fact that it is unlikely to be reproduced independently. Could you comment on that?
PB: “Yes. Unfortunately the data set is rather sensitive in nature, and in the most extreme cases it could be misconstrued as representing a series of allegations of misconduct. As such, I cannot share the data with other researchers, which means the study could not be repeated by anyone outside. I might, however, be willing to share the data with other trusted researchers in academia, following the establishment of a data sharing agreement between the two parties, to ensure that it remains confidential and is not misused.”
PJ: Why did you choose to publish your data in PeerJ rather than some other venue?
PB: “I’ve been a member since the journal was first announced, and signed up early on for the editorial board (although due to the low volume in my subject area I’ve yet to actually edit any papers). As early adopters, my lab already has a paper in PeerJ (number 48) and it has been very well received.
Another important factor was the nature of the paper itself. Many of the publications in the data set are in regular journals, and so publication in one of those journals might be quite difficult. Is a journal editor going to accept a paper that specifically calls out their own journal for failing to act appropriately regarding problematic data? As such, submission to PeerJ was an easy choice because none of the data-set papers are in PeerJ due to the journal’s relatively young age.”
PJ: Could you tell us about the process that went into the publication decision for your study?
PB: “Due to the unique nature of the data set, a lot of thought went into the review process itself, with back-and-forth conversations and phone calls with the editors before the paper even got sent out for review. This was actually a very useful process, because much of the ethics behind the paper needed to be put in the right context before being seen by reviewers. A key example was the discussion of IRB approval. Because some people might classify the study as behavioral or human subjects research, they might argue it requires IRB approval. However, the fact that I conducted the work as a private citizen, outside the realm of my University faculty appointment, meant that this was not possible. The study was also performed ‘post-hoc’, and the only data set available is blinded. Furthermore, all of the raw information is available in the public domain (PubMed and journal websites) for those who care to look. As such, the study was deemed outside the realm of an IRB mandate. Nevertheless, the publication of the raw data could hold the potential to cause harm, and so careful discussions took place throughout the peer review process to ensure that the final paper does not include this data.”
PJ: In your opinion, what would be the best ways to improve the corrective system in the scientific literature?
PB: “I am a firm believer in open data and post-publication peer review (PPPR). In the former case, with resources such as FigShare, data storage is essentially free (or very, very cheap), so there is really no reason not to archive all the data for every paper. In the case of PPPR, this is an area with a lot of exciting developments. A couple of key debates underway right now are the question of anonymity – should commenters have to use their real names? – and the question of who is going to pay for and police all of this extra activity. As academics we all have to deal with changes in the way things are done administratively (e.g., having to put everything in NIHMS to meet the public access standards when preparing grant progress reports). My view is that as open data and PPPR become more common, they will just become part of the normally accepted process of publishing. As such, the journals and portals that make these processes easier will be the big winners.”
PJ: Dr Solomon, the Academic Editor on the paper, had this to say about the publication: “Publicizing anonymous allegations concerning data integrity in research raises many very difficult ethical issues. This was by far the most challenging review I have handled in over a decade as an Academic Editor. I would like to thank the five individuals who reviewed this manuscript for their thoughtful feedback, and Drs. Brookes and Binfield for their effort and patience in working through several rounds of revisions for this important paper.”
PB: “If I can add to this – there were many stages at which an editorial team with fewer scruples would have simply said this paper was ‘not worth it’ and given up. I’m very grateful to the editors and reviewers at PeerJ for sticking with it and actually making this happen.”