Review History


To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).


Summary

  • The initial submission of this article was received on June 2nd, 2013 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 2nd, 2013.
  • The first revision was submitted on August 1st, 2013 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on August 12th, 2013.

Version 0.2 (accepted)

Academic Editor

Accept

Congratulations on the acceptance of your manuscript. We hope you will send more high-quality manuscripts to PeerJ.

Reviewer 1 ·

Basic reporting

NA

Experimental design

NA

Validity of the findings

NA

Comments for the author

All my comments have been addressed.

Reviewer 2 ·

Basic reporting

The authors have addressed my concerns.

Experimental design

No comments

Validity of the findings

No comments

Version 0.1 (original submission)

Academic Editor

Major Revisions

Dear Authors, I hope all of you can contribute to the necessary revisions so that the same peer reviewers can re-review the manuscript.

Reviewer 1 ·

Basic reporting

The manuscript reports findings on a very important problem in the biomedical literature and will prove very useful in many dimensions.

Experimental design

The authors have carefully chosen 5 areas or reagents that are often not reported accurately in the literature. Here are some experiments that would add value or strengthen the claims of the paper.
1) In papers where the reagent was not easily identifiable, it would be interesting to know how many other papers have cited this paper and claimed to have prepared their reagents using the methods of that paper.
2) The authors don't say much about whether there is a bias in the number of papers they selected. Is 135+86+17 a good representation; is that enough? How many of these papers had no relevance to this study, i.e. were not experimental papers requiring reporting of constructs, antibodies, etc.?
3) Would it have been possible to randomly ask a subset of authors from papers that lack identifiable resources why they did not report the details? Is it because the journals don't have a structured form for reporting these details? From the data presented, it almost seems that reporting stringency has no bearing on the number of identified resources.
4) Is it possible that there are simply too many reagents to report for any given paper, and authors report the resources for just a select set of reagents that are most relevant to understanding the point of the paper?
5) In terms of recommendations, the authors seem to suggest many different options, but none of them is very decisive. So far, journals have been successful in requiring submission of sequences to GenBank or structure coordinates to the PDB, and authors for the most part adhere to this requirement irrespective of the journal. Why has this been successful, and is there something to learn and extend to other resources?

Validity of the findings

The findings reported in this paper are valid.

Reviewer 2 ·

Basic reporting

No Comments

Experimental design

No Comments

Validity of the findings

1. p. 6, on the discussion of the difficulty of identifying cDNA or peptides related to a gene: have the authors also encountered issues with identifying the species from which the sequence was isolated? There have been reports that this is problematic.
2. I am a bit surprised about the identification of organisms: usually yeast, frogs, worms, and flies are relatively easy to identify unambiguously; moreover, there are no data for human. How was the analysis done? For example, 0% of the yeast strains were identified, but is there evidence that any should have been; in other words, were there any yeast papers in the set?

Comments for the author

This paper addresses the very pertinent issue of the lack of sufficient information provided in papers to allow research to be accurately reproduced. The article is very well written, and the results are interesting and provide some quantitative measure of the extent of the problem.

Minor comments:
1. p. 7, "Statistical analysis": the section title should be underlined in the same way as 'Journal selection and classification' above.
2. p. 9, section on cell lines: something seems to be missing in the sentence "A source for cell lines was rarely reported and was most common factor for their low identifiability in our study"; please rephrase.
3. At the end of the same paragraph the authors write "see methods section"; it is not clear what this refers to, since this is already the methods section.
4. p. 11: "While it is assuring" -> "While it is reassuring"

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.