Review History


To increase transparency, PeerJ operates a system of 'optional signed reviews and history'. This takes two forms: (1) peer reviewers are encouraged, but not required, to provide their names (if they do so, then their profile page records the articles they have reviewed), and (2) authors are given the option of reproducing their entire peer review history alongside their published article (in which case the complete peer review process is provided, including revisions, rebuttal letters and editor decision letters).


Summary

  • The initial submission of this article was received on January 13th, 2017 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on February 27th, 2017.
  • The first revision was submitted on June 16th, 2017 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on June 22nd, 2017.

Version 0.2 (accepted)

Academic Editor

Accept

Thank you to the reviewers for excellent suggestions, and to the MS co-authors who took up every one of the reviewers' suggestions (including the statistical suggestions). This MS has definitely been strengthened through concerted efforts by all involved and is now suitable for publication in PeerJ.

Version 0.1 (original submission)

Academic Editor

Major Revisions

Thank you for your revised MS/rebuttal, and thanks to the reviewers for their work on this. One reviewer suggested minor revisions, the other major. In the case of the major revision recommendation, none of the requested changes require further experimentation, but do require a response regarding design and statistical methodology.

I also note that reviewer 2 states that:

"I feel this work constitutes an important contribution to the literature and will, hopefully, stimulate further work investigating these ideas."

...and reviewer 1 states that:

"I feel this manuscript is much improved! It is well-written and the subject is interesting and relevant."

I tend to agree with these assessments, so while I am giving this a major revisions recommendation – in order to remain on the conservative side of the decision fence – I will assess the revised MS and rebuttal by the authors in terms of whether it needs to go back to reviewer 2 for one more look at that time.

Thanks again to all of you for your work and professionalism in this review process.

Reviewer 1

Basic reporting

The manuscript is well-written and easy to follow. I'm sorry my initial comment about the punctuation was unclear. (Or if it implied the authors are not strong writers - I only meant that having an outside eye can help to catch small issues.) I was indeed referring to the heavy use of commas and semi-colons. Now that they've been reduced, I find the manuscript reads more smoothly.

Two small issues, in my opinion, remain:

(1) I'm still not sure I'm on board with Figure 4. I definitely prefer the new caption. However, I still feel that representing both effect size and p-value using the same symbol can't really be justified.

(2) I stand by my previous comments and the comments of Reviewer 2 about the discussion. The description of previous experiments without relating back to your results (lines 552-591) is not what one usually expects to find in a discussion.

Experimental design

I feel your reply to my comments and your edits do a good job of addressing my criticism of your experimental design. I especially like that the fencing treatment has been re-framed to be more about the fencing itself, rather than dispersal.

Validity of the findings

As you have pointed out in your reply to the reviewers and editor, your statistical philosophy diverges somewhat from the status quo. I feel that is totally fine as long as you're clear about your statistical methods and include your raw data, which you've done.

I agree with you that identifying specimens to the level of order is adequate for the questions being posed, especially in the context of a comparison to CW99.

Comments for the author

I feel this manuscript is much improved! It is well-written and the subject is interesting and relevant.

One last small note: in the acknowledgements section, my name is "Shaun", not "Shawn"! :)

Reviewer 2

Basic reporting

no comment

Experimental design

1. I see no problem with presenting and discussing Year 1 data. However, time is confounded with supplementation rate for [Year 1] vs [2 & 3], so I’m not comfortable with the inclusion of Year 1 data in analyses calculating interaction terms for Time and Fencing. Why not assess the time and fencing interaction terms using only year 2 & 3 data? I see the authors have calculated the Time interaction term using only years 2 & 3 in a Supp Mat, but I feel Years 2 & 3 should comprise the basis for all analyses and inferences drawn from the interaction terms. Doing so would not detract from the experiment’s ability to assess whether the response to supplementation changes over time.


2. Investigating the effect of fencing is stated as a central goal of this manuscript. I do question the ability of this experiment to detect effects of fencing with respect to the points below - but I do not object to publication on these grounds, as long as the authors are abundantly clear about these limitations:

(i) capacity of fencing to prevent movement, due to the enclosures' open-top design.

If the authors state they test for effects of fencing, I take some issue with the statement that ‘ecologists use fencing to impede movement’. I encourage publication if the authors are more explicit and upfront about how much migration their fencing treatment can be truly expected to prevent. The text in lines 72-83 does not fulfill this need.

If fencing was not a good barrier to movement, surely there would be greater variation in fenced vs open plots for univariate analyses at the functional group level – the authors may wish to comment on this.


(ii) statistical power (five replicates each of open / fenced within each supplementation treatment).

Providing a power analysis for this aspect of the experiment (instead of discussing the experiment's overall power, line 584) would provide more suitable justification and context for their use of p = 0.15. I see no issue with using p = 0.15 per se.

Validity of the findings

3. Fig. 4: I do find that adjusting effect size for p-value is a bit odd and obscures the results. Readers should not have to pore over the Supp Mat tables to get the full picture. Why not reserve arrow thickness to indicate absolute effect size and simply add a number of stars beside each arrow to indicate significance level?

Comments for the author

4. I commend the authors for endeavoring to replicate the well-cited Chen & Wise 1999 study. This study was important in establishing bottom-up limitation of multiple trophic levels in detritus-based food webs, and is often cited as a key example documenting a clear effect. I see a twofold value of this manuscript: (i) it attempts to replicate the above work, and (ii) it establishes and attempts to test hypotheses for variation in published experimental findings. I feel this work constitutes an important contribution to the literature and will, hopefully, stimulate further work investigating these ideas.


5. Comments about the food web aspect of this work:

Line 444, “particularly in Year 1”: I do not see this in Fig. 4.


I see a few interesting aspects of the results at the gross community level; the authors may wish to mention these, at their discretion:

Lines 149-150: I don’t recall seeing the authors comment on this in the discussion. This is interesting because it suggests that significant detrital consumption occurred in supplemented plots, despite apparently few responses across taxonomic groups by the end of Year 3.

The strongest effects, according to the current method of establishing effect size in Fig 4, manifested among a few of the most numerically dominant groups. This further supports bottom-up limitation of primary consumers at a gross community level.

Again judging from Fig 4: Among taxa that were evaluated by Chen & Wise 1999, results are largely replicated among primary consumers and the mixed trophic level in year 2 data, the first year with a supplementation rate comparable to CW99. In the final year, biomass then seems to accumulate in a few numerically dominant primary consumers. It would be useful to those studying food webs to know whether there occurred a shift in relative abundance towards these taxa. If this is the case, then the overall result at the gross community level remains unchanged between years 2 & 3, and there is simply a reduction in diversity.

If the authors’ rainfall hypothesis is correct, densities of strict fungivores (or primary consumers capable of consuming fungal hyphae) should have risen in years of high rainfall – was this the case?


6. Minor comments:
Lines 74-83: could be removed, significantly reduced, or moved to the discussion.

Line 385: I believe “Year 3” should be “Year 2”

Line 428: I believe “Fig. S4.4” should be “Fig. S5.4”

Line 718: the meaning of this sentence is not clear to me

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.