Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on March 17th, 2018 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on April 22nd, 2018.
  • The first revision was submitted on June 3rd, 2018 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on June 19th, 2018.

Version 0.2 (accepted)

· Jun 19, 2018 · Academic Editor

Accept

We appreciate the quick turnaround with your revisions. The manuscript is greatly improved. I am pleased to accept this paper for publication.

# PeerJ Staff Note - this decision was reviewed and approved by Patricia Gandini, a PeerJ Section Editor covering this Section #

·

Basic reporting

The paper meets all basic reporting requirements.

Experimental design

The experimental design is sound.

Validity of the findings

The analysis and presentation of findings are strong.

Additional comments

Thank you for your extensive revisions; I believe the paper is much stronger as a result. I wish you the best!

Version 0.1 (original submission)

· Apr 22, 2018 · Academic Editor

Minor Revisions

All three reviews suggest minor revisions. I agree with this assessment as the manuscript is a solid contribution to the field and well introduced. However, the methods and discussion sections require substantial improvement.

In particular, please respond to Reviewer 1's question about fixed vs. random effects in your model. Reviewer 2 asks for clarification about the claimed superiority of one platform over the other and suggests greater consistency in the terminology used. Reviewer 3 raises many good points that will improve the Discussion section, specifically the need to mention the possibility of double counting or over-counting with both platforms.

Figure 1 could be improved as suggested by Reviewer 2. I would also like to see a legend item for 2-3 classes of water depth and an indication of the inaccessible region(s) of the study area.

We look forward to your re-submission.

Reviewer 1 ·

Basic reporting

Basic reporting is solid.

Experimental design

Experimental design is solid, but analysis methods require more clarification and justification as detailed in general comments.

Validity of the findings

Validity is reasonable, but analysis methods require more clarification and justification as detailed in general comments.

Additional comments

General comments

line 108: Was the statistical replicate the ten different sampling occasions, multiplied by the two methods, multiplied by the number of species?

line 113: Were the counts from several observers for each count method and sampling occasion summed before analysis?

lines 113, 276: Are you certain that the higher counts are more accurate than the lower counts? Is either method more likely to have double counting?

line 117: Were all the counters good? Were they randomly assigned to methods and ice conditions?

line 150: Why was the interaction between ice and method coded as a random effect, as described in the statistics section? There would only be four combinations of ice and method to estimate slope variance. Seems like it would be better to treat this as fixed:

count ~ global intercept + random species intercept + ice + method + (ice x method)

This model could easily produce the output found in Table 1 and Figure 2D. Looking at Table S2, it appears that you treated the interaction as fixed after all, as it is found in an F-table. Please clarify this discrepancy.

Also, keep in mind that the models described so far assume the same slopes for every species. However, you spend a few paragraphs and figure panels describing variation among species (lines 168-190, 231-263). If variation among species was indeed important, why not have fixed species terms in your model? For example:

count ~ global intercept + ice + method + species group + (ice x method) + (ice x species group) + (method x species group) + (ice x method x species group).

That is the model implied by figures 2A-C, and would allow you to better interpret the importance of the different colored slopes. It isn't critical that you do this, but it makes the species variation discussion more believable.

line 151: The log link function is the standard for negative binomial regression. Why was the identity link used in your study? Were the count data really normally distributed?

line 151: For the ice variable, you jump from 0% ice (0) to >70% ice (1). Is this correct? What about in-between values? Did they not occur in your study?

line 152: The Arkusz2 and Arkusz3 tabs are empty in your XLSX data sheet.

line 155: How did you estimate standard errors and confidence intervals found in various tables? From model output? Simple summary statistics?

Reviewer 2 ·

Basic reporting

The paper meets all basic reporting requirements.

Experimental design

The paper needs improvement in the description of methods, as noted below:

- Please add a section to the methods describing methodology for cost accounting.
- Please add a statement to the methods acknowledging that because the two count methods could not be compared to a completely unbiased estimator, but only to each other, the method giving higher total counts was considered superior (if this is true).
- Please explain in the introduction or methods why you grouped the species detected into the groups that you used (i.e., different panels in fig 2). Are the different groups of different conservation concern, or expected to exhibit different detectability with regard to ice and observation method? Why not simply group all species together?

Validity of the findings

The paper requires improvement in discussion of the findings, as noted below:

- The manuscript relies on an unchallenged assumption that the survey method that gives higher counts is the superior method. The manuscript would benefit from a fuller discussion of potential sources of bias in both aerial and ground survey methods, or at the least an acknowledgment of the assumption that higher counts indicate a superior method.
- I am not convinced that the main conclusions drawn by the authors are supported by the analysis in the paper. The authors conclude in line 211 that the major factor influencing census results was ice cover. However, figure 2 shows major differences between the two survey methods (ground vs aerial) regardless of ice presence, for some species. I suggest that the authors should either revise their analysis to focus on broader patterns (for example, include species as a random effect in an overall analysis of count ~ census method + ice condition + census method * ice interaction), or be more conservative in their conclusions (for example, acknowledge that the detection of some species is more strongly affected by ice cover than others).
- Figure 2 needs improvement. I suggest plotting points rather than lines, identifying species or groups by point shape or color. I suggest adding confidence intervals. I suggest grouping species into biologically meaningful categories and displaying the means of each group rather than displaying each species individually, leaving the individual species means to be communicated in the S1 table.
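The species-as-random-effect suggestion above can be sketched with statsmodels' MixedLM. Note that MixedLM is Gaussian, so this is only a linear approximation to a count model; the data and variable names below are hypothetical, invented for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per species x occasion x method
rng = np.random.default_rng(1)
n = 300
species = rng.integers(0, 6, n)   # species index (random grouping factor)
method = rng.integers(0, 2, n)    # 0 = ground, 1 = aerial
ice = rng.integers(0, 2, n)       # 0 = ice-free, 1 = ice present
count = (20 + 5 * species                       # species-level baselines
         + 3 * method + 4 * ice + 2 * method * ice
         + rng.normal(0, 2, n))
df = pd.DataFrame({"count": count, "species": species,
                   "method": method, "ice": ice})

# Species as a random intercept; method, ice and their interaction fixed
res = smf.mixedlm("count ~ method * ice", df, groups=df["species"]).fit()
print(res.fe_params)  # intercept, method, ice, method:ice
```

The fixed-effect estimates (`fe_params`) then describe the overall method-by-ice pattern, while species-level variation is absorbed by the random intercepts rather than driving the headline conclusions.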

Additional comments

Small suggested changes are noted below:

- Please clarify the meaning of "overall result" in lines 87-88: does this refer to "number of birds detected"?
- Please clarify the meaning of "ice conditions" in line 90: suggest rewording to "conditions when ice is present"
- Figure 1: please darken the country borders in the small overview inset map to make them easier to see. In addition, please add more information to the caption; for example, explain that Szczecin and Police are towns. Please also explain the difference between observation points and transects in the figure caption.
- Line 153 and throughout: please be consistent with terminology, using only one of "count type", "method", "platform" throughout manuscript
- Line 168: please explain why the species in fig 2 were separated into the various panels, referring if necessary to the fuller explanation that should be added to the introduction or methods.
- Lines 205-209: this paragraph is interpretation and belongs in the discussion.
- Throughout: please use scientific names only, the mixing of scientific and common names is confusing for a non-expert.

Reviewer 3 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

At present, I am not convinced that the analysis correctly accounts for the paired nature of the counts (i.e. ground- and aerial-based counts were done on the same day and should return the same number of waterfowl). By pooling all counts for each method into a single estimate, the analysis potentially overlooks differences observed within individual count dates. Adding count date as a random effect in your models might be one way to account for this. Alternatively, calculating the difference between the two counts on each particular date and then running a t-test to determine whether the difference changes between ice-free and ice conditions would be another way to account for this.
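The second suggestion above (per-date differences compared between ice conditions) can be sketched with scipy. The counts below are invented for illustration only; the point is that differencing the paired counts first preserves the pairing by date:

```python
import numpy as np
from scipy import stats

# Hypothetical paired totals: ground and aerial counts on the same dates
ground = np.array([950, 1020, 880, 990, 1100, 700, 650, 720, 680, 610])
aerial = np.array([900, 980, 860, 950, 1060, 820, 790, 850, 800, 740])
ice = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = ice present that date

diff = ground - aerial  # per-date difference preserves the pairing
t, p = stats.ttest_ind(diff[ice == 0], diff[ice == 1])
print(t, p)
```

A significant result would indicate that the gap between the two methods shifts with ice condition, which is the quantity of interest here, without pooling away the date-level pairing.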

These surveys relate to only a single waterbody. Although the findings are likely generalizable to other locations, there is little discussion of this potential limitation of the study.

Additional comments

This manuscript represents research that will likely be of interest to many biologists monitoring waterfowl at higher latitudes. It seeks to identify differences in the numbers of waterfowl recorded during aerial- and ground-based counts of the same study area conducted under different ice conditions. The authors find that although ground-based counts generally return higher numbers of waterfowl in the absence of ice, aerial counts produce higher numbers when the waterbody has high ice coverage. The authors identify species-specific departures from these general patterns, suggesting which species are most suitable for counting using each of the two methods. They also provide costings for the two methods, giving readers an understanding of how cost-effective each method may be under various scenarios.
The manuscript is very clearly written and presents the work and findings well. I believe that the authors have adequately grounded their research in the literature on this topic, and I found the introduction well composed. Aside from some minor comments, mostly related to softening how definitively the authors interpret their results, I believe that this manuscript will make a worthwhile contribution to the field.
General comments
• Throughout the manuscript, accuracy is implicitly assumed to coincide with the count that returns the highest number of individuals. No discussion is made of the potential for double counting of some individuals to inflate the number of waterfowl recorded. If double counting is particularly high for one method in general, or during certain ice conditions, then there is the potential that the results do not conform to the interpretation of the authors. At the very least, some discussion of the likelihood of double counting for each technique needs to be made. For instance, it would be good to know whether waterfowl remained on the water surface following overflight of the survey aircraft, or if they flushed to a neighbouring area where they could have potentially been counted again on a successive pass. Similarly, how did the ground counters monitor bird movements when they were walking between successive count stations?
• The authors state definitively that the two methods can be used interchangeably during ice-free conditions because the magnitude of the difference between counts from the two methods is small. However, their results indicate that the total difference is approximately 10%. If a monitoring program is intending to document population change, 10% noise in the data would drastically reduce statistical power to detect a population decline (or increase). At the very least, no decline smaller than 10% could be detected with any confidence without a correction factor being introduced. I would very much like to see the authors rephrase their assertion to something like 'the observed difference (~10%) between methods may be acceptable in some situations for the two methods to be used interchangeably. However, researchers must consider the influence these small differences between methods might have on their results and develop correction factors where the differences are deemed unacceptable.'
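The power concern in the last point can be illustrated with a quick simulation. All numbers here are assumed for illustration (a true 10% decline observed through ~10% between-method noise with a single before/after count), not taken from the manuscript:

```python
import numpy as np

# Illustrative simulation: can a genuine 10% decline be distinguished
# through ~10% measurement noise with one count before and one after?
rng = np.random.default_rng(0)
sd = 0.10                            # assumed 10% relative noise per count
before_true, after_true = 1000, 900  # a genuine 10% decline
trials = 10_000
before_obs = before_true * (1 + rng.normal(0, sd, trials))
after_obs = after_true * (1 + rng.normal(0, sd, trials))
frac_right_direction = np.mean(after_obs < before_obs)
print(frac_right_direction)
```

Under these assumptions the observed counts point in the correct direction only about three-quarters of the time, so even the sign of a 10% change is uncertain, let alone a statistically defensible estimate of its size.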
Specific comments
L151-152 The ‘1 – over 70%’ presentation makes it confusing as to whether you were categorising 1% ice cover to more than 70% ice cover as ice conditions. I suggest using ‘1. Over 70% of surface covered by ice’ instead of the dash.
L173 ‘greater efficiency of species identification using this method.’ Could it not be that other species are mis-identified as this species, rather than that you are better at identifying it from the air? It seems unlikely that an observer travelling at 100 km/h could identify it more easily than someone with a scope.
L189 To state that the species are similar in their behaviour seems to be an oversimplification here. Perhaps you could state that they are similar in their foraging method (for which there are studies that could be cited), because behavioural differences could still be responsible for this observation. For instance, they may show different responses (flush vs dive) to overflight of the survey aircraft. Clearly there is something different about their appearance or behaviour that is driving these disparities, and to pose that they are similar in their behaviour is, I believe, a misrepresentation.
L197 Please insert ‘ground’ in front of ‘counting’ to make it clear that you are reimbursing only the ground counters not those in the aircraft.
L199-201 I am not sure that it is accurate to state that the cost of the aerial survey is 212 € per 100 km of coastline because the survey plane would not have to make transect flights out over the waterbody. Therefore, total flight time would be reduced if biologists were interested in only counting a linear strip along a coastline.
L206 ‘Methodologically justified technique’ Please explain what you mean by this. Are you claiming that they are more repeatable, are more accurate, are safer (which they are not)?
L208 ‘Only one disadvantage of an aerial count’ Wrong. I can think of many. They are more dangerous (http://profile.iiaa.org/sites/default/files/images/Obits/Sasse-2000-WSB.pdf), surveys may need to be conducted in places where there are airspace restrictions, difficulties in identifying taxa with similar appearance, high levels of disturbance to the wildlife in some situations. These points need to be briefly mentioned in the discussion.
L212 ‘ground counts gave better results than aerial ones.’ You do not know this for certain. You have based your assessment of accuracy on size of the count. As you do not know the true number of individuals in the population, you cannot state that they are better. You can say ‘ground counts gave higher numbers relative to aerial ones.’ As per my general comment, any reference to the accuracy of the count needs to be removed from the discussion.
L216-217 ‘does not really matter which censusing method is used during ice-free conditions as the counts are not specially affected by this.’ This statement is far too flippant. See my general comments as to why a 10% difference is likely unacceptable in many situations.
L224 ‘underestimated’ again, you do not know this. Both counts could have overestimated the true number if double counting was substantial in both methods.
L232 ‘real number of birds’ you cannot claim to know the real number.
L226-227 comparing the results here to inaccessible lakes seems unjustified. It is not until the next paragraph that the reader gets a sense that under ice conditions patches of open water are inaccessible to the ground counters. Your methods state that the survey was designed to effectively cover the entire survey area, so the reader is led to believe that no areas are inaccessible. You may need to introduce the concept that some areas of the waterbody are indeed inaccessible to ground counters prior to discussing the similarities of your study with those where accessibility has previously proved difficult.
L263 ‘underestimate’ replace with ‘low relative to ground counts’.
Figure 2. I would like to see error bars on plots a-c.
Figure 2. The numbers reported in plots a and b exceed the total number of birds reported in plot d (and table 1). I suspect this is an error due to a missing decimal point?

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.