Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on February 8th, 2018 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on March 19th, 2018.
  • The first revision was submitted on June 1st, 2018 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on July 10th, 2018 and was reviewed by 3 reviewers and the Academic Editor.
  • A further revision was submitted on September 6th, 2018 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on September 18th, 2018.

Version 0.4 (accepted)

· Sep 18, 2018 · Academic Editor

Accept

Thank you for addressing the reviewers' comments.

# PeerJ Staff Note - this decision was reviewed and approved by David Roberts, a PeerJ Section Editor covering this Section #

Version 0.3

· Aug 21, 2018 · Academic Editor

Minor Revisions

The reviewers are mostly satisfied with your amendments, with just some minor clarifications pending. Once these are addressed, we will consult only that specific Reviewer before formally accepting your manuscript for publication.

Of note, I understand that dealing with Reviewers' comments can be a frustrating experience for many authors. However, I can assure you that the vast majority of reviewers are well-intentioned, voluntarily spending their time on this work. This is clearly the case for those who reviewed your manuscript. Thus, please bear this in mind when preparing your rebuttals, as an aggressive tone is not helpful for anyone. But to finalize these comments on a lighter note, I have attached an excellent light-hearted editorial on referees that may interest you.

·

Basic reporting

I have nothing more to add from my first two reviews.

Experimental design

I have nothing more to add from my first two reviews.

Validity of the findings

The findings could be true, but the high uncertainty and potential biases warrant cautious interpretation. In my opinion, the paper presents an intriguing result with too much confidence that the result is true.

Additional comments

Probably not a great idea to insult the reviewer. I was only trying to help.

Reviewer 3 ·

Basic reporting

Fine

Experimental design

Fine

Validity of the findings

Fine

Reviewer 4 ·

Basic reporting

Line 246: terminology should be kept consistent, so in this case I would suggest “cryptic poaching” instead of “cryptic killing”.

Experimental design

Past studies, briefly mentioned in the introduction, have attempted to estimate unreported mortality of large carnivores. How does this study differ from those? Why does the method being used by these authors better apply to grizzlies, or at the very least build on the work already conducted? The introduction currently reads as though the work previously done has very little application to unreported grizzly mortalities. I struggle to believe that is true, but if it is, I would love to know why. There is a mention of this in the discussion, but I think it would improve the rigor of the manuscript to address it more fully in the introduction.

Validity of the findings

Line 245, why do the authors assume that bears had been shot and their collars destroyed in this case?

I think the authors have done a good job of addressing the concerns of previous reviewers regarding the assumptions made.

Version 0.2

· Jun 18, 2018 · Academic Editor

Major Revisions

The two reviewers have once again raised a number of issues with your study, several of which have not been adequately addressed in your revision. I therefore invite you to carefully consider their comments and address them appropriately, including taking a cautious approach with regard to the reliability of the conclusions that can be drawn from your data.

·

Basic reporting

The paper has been cleaned up and written more clearly. More could be done to improve the writing, however. Many superfluous phrases could be cut or simplified.

Experimental design

There is no experimental design, which is typical of studies such as this one. However, without experimental design elements, there is good reason to interpret the result cautiously. See below for more on study design issues.

Validity of the findings

The study’s approach is really very simple, using algebra to solve for the expected number of non-hunter-killed bears in a government database to point out that the expected number is much larger than the observed number. The fundamental weakness of the paper, which I tried to point out in my first review, is its zero degrees of freedom when comparing the observed value to the expected value.

As I suggested in my comments on the first draft, the telemetered bears should be seen as the training data set for the uncollared bears, although it is questionable whether the training data set is truly representative of the uncollared bear population. The new draft of the paper improves on the earlier draft by pointing out that telemetered bears might be unrepresentative of uncaptured bears, but some of this improvement is lost when the paper dismisses the likelihood of a captured-bear bias on the unconvincing grounds that three capture methods were used. In my earlier review I also pointed out that had hunters seen the collars and decided not to shoot only two collared bears, and had those bears died by human agents other than hunters, the shift in the expected number of uncollared non-hunter-killed bears would have been from 64 to 41, a 36% reduction. The new draft introduced yet another source of potential bias by revealing that collars were designed to disengage from bears within <1 year to 6 years (this potential bias should have occurred to me during my reading of the earlier draft). Whereas bears in the training data set could have ended up in the mortality data set only during the time they were collared, uncollared bears could have ended up in the data set during their entire lifespans until the study period was curtailed. I am not sure how significant this potential bear-year bias might have been. Another relatively small bias might have influenced the study finding had any of the collared bears that dispersed out of the study area returned without their collars. By pointing out all these potential biases, I don’t mean to argue that the study result is wrong; my point is that ample reason exists to interpret the result cautiously.

I noticed that my notation in my first review did not transfer from my Word document to the PeerJ reviewer form, so I’ll change it slightly to facilitate transfer. Consider the following equation, which I prefer over the equation in the manuscript because my version isolates the term the authors wish to estimate:

F’ = F/T,

where F’ is the expected number of uncollared bears killed by non-hunting humans, F is the reported number of uncollared bears killed by hunters, and T is the training-data ratio of reported collared hunter-killed bears to collared non-hunter-killed bears. In this case, combining genders, T = 10/9 and F’ = 71/(10/9) ≈ 64. This estimate of 64 is the expected number of uncollared non-hunter-killed bears in the database, and is larger than the observed number of 10. This is really the message of the paper: that 54 (84%) of the bear fatalities caused by humans other than hunters are missing from the database. However, this outcome is weakened by the comparison of one observed value to one expected value, involving 0 degrees of freedom. The outcome is a comparison of two numbers that cannot be compared in a statistical hypothesis test.
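
To make the arithmetic concrete, here is a minimal sketch in Python of this calculation, using only the four counts quoted above (the variable names are illustrative, not the authors'):

    # Worked version of F' = F/T with the counts quoted in this review
    # (combining genders).
    collared_hunter, collared_nonhunter = 10, 9    # collared (training) fates
    uncollared_hunter = 71                         # reported uncollared hunter kills
    uncollared_nonhunter = 10                      # reported uncollared non-hunter kills

    T = collared_hunter / collared_nonhunter       # training-data ratio, ~1.11
    F_prime = uncollared_hunter / T                # expected non-hunter kills, ~64

    missing = F_prime - uncollared_nonhunter
    print(round(F_prime))                                # 64
    print(round(missing), f"({missing / F_prime:.0%})")  # 54 (84%)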

I suggest the study’s finding warrants a letter-length paper pointing out the magnitude of difference between the observed and expected numbers of non-hunter killed bears as an intriguing indication that non-hunter killed bears might be under-reported. Even between drafts of this paper, 1 bear got added to the uncollared, non-hunter killed bear category, thus indicating the vulnerability of bear fates to investigator assumptions and interpretation. Decisions over which bear goes into which category can substantially change observed and expected values because the numbers in the training data are small. And then there is the issue I raised in my review of the earlier draft that experimental design elements are lacking from this study, which is another reason to interpret the finding with caution.

Breaking down the analysis by gender further weakens the analysis by diminishing the numbers serving as training data. The numbers of bears in the training data set are too small by gender to support calculations of expected values from ratio metrics.

Finally, I think the paper is improved with the inclusion of more detail about the fates of telemetered bears. I would even encourage more detail. I suggest that the strength of the paper is in the story it tells about human-caused bear mortality, of which the estimated expected number of unreported human-caused bear mortality is a part.

Manuscript Line

119 “Based on these factors…” is too vague an explanation for how it was determined whether bears were killed via cryptic poaching.

148 I could not understand how PopTools was used to generate confidence intervals. Was PopTools applied to the training data? Or also to the counts of hunter-killed and non-hunter-killed bears in the database? It seems to me the only appropriate application of randomized subsampling would be to the training data (a sketch of such resampling follows these line comments).

199 How was a confidence interval derived from a count within one sampling unit? Counting my family of 4 within my single household sampling unit, I get 4 and not 2 to 6. A count is not an estimate.
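
To make the subsampling suggestion in my comment on line 148 concrete, here is a minimal sketch (an illustration, not a description of what PopTools actually did) of bootstrapping the training data alone to put an interval on the adjusted estimate:

    import random

    # Bootstrap only the collared (training) sample; the counts are those
    # quoted in this review. This scheme is an assumption for illustration,
    # not the authors' PopTools procedure.
    random.seed(1)
    collared = ["hunter"] * 10 + ["nonhunter"] * 9
    uncollared_hunter = 71   # reported uncollared hunter kills

    estimates = []
    for _ in range(10_000):
        boot = [random.choice(collared) for _ in collared]
        h, n = boot.count("hunter"), boot.count("nonhunter")
        if h and n:  # skip degenerate resamples with a zero count
            estimates.append(uncollared_hunter * n / h)

    estimates.sort()
    lo = estimates[int(0.025 * len(estimates))]
    hi = estimates[int(0.975 * len(estimates))]
    print(f"expected non-hunter kills, ~95% bootstrap interval: ({lo:.0f}, {hi:.0f})")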

Additional comments

Replies to responses by cited comment number

1 Regarding the definition of mortality, I see that the first definition of a Google search on ‘mortality’ was used. I know this because I got the same definition from Google based on this query. I suggest scrolling down the Google output to a reliable dictionary such as Merriam-Webster, where you would see that mortality has different meanings than fatality, and that ‘mortality rate’ is redundant.
9 I see that my suggestion to not report gender differences was followed, but that doing so was deemed unwise. If unwise, then perhaps it would have been best to decline to follow my suggestion. This said, I explained why the division of results by gender differences was distracting and unhelpful to the paper. The numbers are too small, and so are the differences. But that is my opinion. Another approach would be to demonstrate that the gender differences are of sufficient magnitude to justify reporting fatalities by gender.
The paper continues to report results by gender, but without suggesting differences in fatality rates between genders. This work-around to my suggested revision solved part of the problem, but still burdens the paper with too many findings. The results are difficult to follow. And I continue to wonder why it is so important to break out the results by gender, except that by doing so the paper implies gender differences. Doesn’t work for me.
18 Hunters love to talk? I’ve known many hunters. Some love to talk. Some do not. Some say what they want the regulators to believe. The stated assumption about hunter behavior appears speculative, rather than based on any evidence.

Reviewer 3 ·

Basic reporting

I can relate to being defensive in responding to a critical review. Nonetheless, I have to chuckle at the irony of the authors' response to my review. They assert that I totally misunderstood their manuscript, and I respond by asserting that they totally misunderstood my review. Looking back at my original review, I can sort of see why the authors concluded that I did not understand their analysis, but I assure you I did. Let’s start fresh.

(1) The revision includes a formula, but may I suggest that they change it to the one that Reviewer #1 suggested and include the citation? I recognize the authors arrived at it independently, but the earlier use of the method is still worth acknowledging. Reviewer #1 said, “The essential form of the adjustment, F̂ = F/D, has its origin in Horvitz and Thompson (1952), but multiple revisions or additions have since been made to their estimator to suit specific challenges (Korner-Nievergelt et al. 2011).” I think this form of the equation is simpler and more intuitive. It also works very well with the “ratio” language used in the text, such as “The ratio between the number of permitted hunter kills to the number of human-caused mortalities but for other reasons is much different for the uncollared bears in the CI database than it was for the collared sample.” The value D in the formula above is exactly that ratio.

(2) Line 63-65 says “The difference in the ratio of bears killed by permitted hunters to bears killed by people for other reasons between the uncollared bears in the government database and the radiocollared sample provides an estimate of the number of unrecorded human-caused mortalities of uncollared bears.” I get this and agree, certainly when comparing these ratios within the same set of years in the same area. Please take a minute and just consider these questions: (a) For management purposes, are there any cautions for extrapolating this estimate to other areas or to other years in the same area? (b) Is there an effect from having a limited number of hunting permits, which may fix hunting mortality to a specific number (and potentially affect that ratio of hunting to non-hunting mortality)?

(3) I accept the argument for not including any natural mortality. Given that natural mortalities are of no interest, I suggest you remove them from Table 1. I also suggest the following format for the table (picky, I know):
                 Hunting   Non-hunting
    Collared
      Male          6        4 (2)
      Female        4        5 (1)
      Total        10        9

    Uncollared
      Male         45        8
      Female       26        2
      Total        71       10

(4) The discussion begins with a brief review of previous estimates of the proportion of non-reported deaths among non-hunting human-caused deaths in BC (53%, 62%), which were based on samples partly obtained within the same study area and over time frames that overlap significantly. This is followed by the estimate from the current study and a similar study further north in BC (84–88%, 90%). The differences are notable, but the authors do not support or advocate for either estimate as better, more realistic, or less biased. A reader might conclude that surely the authors believe this current estimate is better, hence the publication of a new estimate, but are we correct? Which should a wildlife manager use for management purposes? It may not be obvious, but much of my original review was centered on this very question. I was encouraging the authors to provide more context for the need for another estimate, and some discussion of the pros and cons of the old versus new approach. Interestingly, it was obvious from the author responses to my review that they are skeptical of the previous estimate and made several references to unrealistic assumptions. Is this not important to discuss? For example:

(a) THEY [Cherry et al.] HAD TO MAKE TWO ASSUMPTIONS THAT WERE CLEARLY INCORRECT.
(b) I THINK THIS REVIEWER MISSED THE METHOD WE USED AND ASSUMES THAT WE ARE ESTIMATING THE REPORTING RATE FOR COLLARED BEARS AS I DID 20 YEARS AGO (MCLELLAN ET AL. 1999) AND AS DID CHERRY ET AL. (2002). WE DO NOT USE THIS METHOD AS IT MAKES ASSUMPTIONS THAT WE KNOW ARE INCORRECT.
(c) My original review and response comments: “Are they, perhaps, suggesting that the estimated reporting rates obtained from a radio-marked sample are biased high?” NO, THIS IS NOT ASSUMED. “If so, then I think this point might be made more strongly. In fact, perhaps this is where this manuscript needs to go, to make it more useful for biologists working in other areas.”

So, all told the authors seem to be saying…Estimating reporting rate from a collared sample has 2 ASSUMPTIONS THAT ARE CLEARLY INCORRECT, it results in non-reporting rates which are lower than the non-reporting rate estimated in this study, but IT IS NOT ASSUMED that it is biased? Please explain.

(5) Despite the views of the authors, I was completely aware that the current analysis was not utilizing information about how many of the non-hunting, human-caused, radio-collared deaths were reported. I was simply asking for them to report this number or proportion. Does it support your estimate or not? If not, please provide some explanation.

(6) I agree with the decision to combine sexes and eliminate the discussion about any differences.

Experimental design

no other comments

Validity of the findings

no other comments

Additional comments

no other comments

Version 0.1 (original submission)

· Mar 19, 2018 · Academic Editor

Major Revisions

The reviewers have all provided very thorough feedback on your manuscript. While they find your study of interest, a number of key concerns have been raised that will need to be addressed through major revisions.

·

Basic reporting

The reporting would benefit from some changes. The terminology could use some work. For example, ‘mortality’ is often used where ‘fatality’ would be more appropriate, as fatality refers to a death event whereas mortality refers to a rate, e.g., deaths per 100,000. The term ‘mortality rate’ is redundant.

‘The number of bears killed by people for all other reasons [than hunting]’ might be more concisely represented by ‘the number of non-hunter-killed bears.’ Basically the study is about bears killed either by hunters or non-hunters.

Specific editing comments

Line 3 I suggest breaking the first sentence into two sentences, the first ending with conservation. I suggest merging the next sentence by replacing the period with a comma and the ‘however’ with a ‘but’.

Line 8 Replace ‘for’ with ‘including’.

Line 9 Delete the first ‘for’.

Line 15 Add comma after ‘office’.

Line 20 Delete hanging parenthesis.

Line 39 The reasons for non-hunter fatalities ought to be summarized in the Introduction.

Line 114 Table 1 identifies the number of female bears as 37. However, I suggest skipping all of the analysis of gender differences in reporting of bear fatalities.

Line 118 Here is a good example of why it would help to include a table summarizing the circumstances of fatalities assigned to the non-hunter category. This bear’s collar was attached to bottles and tossed into the river, probably bringing some laughs to whoever did it. But how does this act of research vandalism, by itself, support the determination that the bear was killed by the non-hunting public? What if the vandal(s) found the bear dead already, either killed by a hunter or by natural causes? Left as is, the assignment of the bear to the non-hunter fatality category seems like a leap to guilt. There must be an additional reason for the category assignment.

Line 122 This entire paragraph and much of the next paragraph can be deleted without loss to the main result of the study.

Line 141 This paragraph is discussion material, so belongs in the Discussion section. I also suggest deleting all discussion about sex differences in reporting rates.

Line 164 Yes, the sample size is small. For this reason, I suggest revising the following clause by adding a statement of uncertainty such as ‘might indicate.’

Line 169 Replace included with include.

Line 174 The second reason seems the same as the first.

Line 175 The sentence beginning with ‘In addition’ lost me. I suggest rewriting it.

Line 179 Is there any basis for this speculation about more male bears being killed in hunter camps?

Line 192 Why would a hunter report not shooting a bear because it was collared?

Line 194 This paragraph includes multiple conclusions that are unfounded or over-confident. Some of the statements could use citations, and some could use more caution.

Line 204 The citation does not appear in the list of references.

Line 207 I suggest deleting the sentence on the time period beginning in 1995.

Line 208 I could not understand the sentence about managers and researchers knowing something by far…

Line 211 ‘…nature of human-caused mortalities…’ seems vague. It would help to be more specific.

Check the referencing. McLellan 1998 is cited on line 21, but is not listed in references. The same for Servheen. Check all of them.

Experimental design

Grizzly bears participating in a telemetry study that was begun in 1979 are used in this paper as a training data set to obtain what are assumed to be the true proportions of bears killed by hunters versus non-hunters. The ratio of hunter to non-hunter fatalities in the training data is then used to adjust an agency database of bear fatalities for the proportion of non-hunter deaths that were unreported. This is an approach often used to adjust fatality estimates for the proportion of fatalities not found during fatality monitoring where human activities caused wildlife deaths. The number of detected fatalities F is divided by a fatality detection rate D informed by trials imposed on the searchers who, ideally, are blind to the trials. The essential form of the adjustment, F̂ = F/D, has its origin in Horvitz and Thompson (1952), but multiple revisions or additions have since been made to their estimator to suit specific challenges (Korner-Nievergelt et al. 2011). Applied to the bear study, F is the number of bears reported killed by hunters, D is the ratio of hunter to non-hunter killed bears in the training data, i.e., the telemetered bears, and F̂ is the estimated number of bears killed by humans who were not hunting.

Before continuing, I should interject three points. One is that the paper needs to simplify the analysis by eliminating comparisons of outcomes by gender. The case for gender differences in reported fatality rates is weak, where percentage differences really represent small numerical differences in bears of one gender or the other. Second, whether to include suspected causes of death should be decided, and the results should either include or exclude fatalities of suspected cause. I suggest using only bears of known cause of death, but I doubt that it makes much difference either way. Third, the study results should include only one time period, which is the period over which bears were captured for the telemetry study. Breaking out results over a more recent time period is justified by an assumed greater reporting accuracy in recent years, but no evidence is provided in support of this assumption. Why would reporting be less accurate during 1980-1995 than during 1995-2015? Using the more recent time period yields a greater adjustment to the number of bears killed by non-hunting humans, but at the cost of relying on an even smaller sample size. I suggest using only the study period 1980-2015. In summary, the paper would be stronger by restricting its focus to those bears reported as fatalities caused by hunters and non-hunters, with and without telemetry units, i.e., only four numbers. These four numbers would be 10 collared bears killed by hunters, 9 collared bears killed by non-hunters, 70 uncollared bears killed by hunters, and 10 uncollared bears killed by non-hunters.

The Horvitz-Thompson estimator, or any of its analytical descendants, can be highly sensitive to the effects of bias and error. A relatively small change in D can greatly affect the adjusted number of fatalities. If only one fatality was found or reported, whereas 50% of the training data were found or reported, then 1 ÷ 0.5 yields 2. But if D = 0.1, then the adjusted fatalities increases to 10 (1 ÷ 0.1), and if D = 0.01, then the adjusted fatalities increases to 100 (1 ÷ 0.01). The outcome of the fatality reporting of telemetered bears makes a huge difference to the estimated number of bears killed by non-hunters. Therefore, the training data set used to derive D must be reliable.

The reliability of the training data in the bear study bears scrutiny for several reasons. D is a ratio unaccompanied by an error term. The ratio that was reported for the training data was 1.11 (10 ÷ 9), which would yield an adjusted number of non-hunter fatalities of 63. However, D was based on a small sample size, meaning that a shift of one fatality from the hunter to non-hunter category, or vice versa, would change D to either 0.90 or 1.375 depending on the direction of the shift. Applying these outcomes to 70 reported hunter deaths, the adjusted non-hunter fatalities could be either 51 or 78 bears killed by non-hunting humans, or 19% lower or 25% higher than the reported number of 63. A shift of only 2 fatalities to the other category would change D to 0.73 or 1.71 depending on the direction of the shift. Applying these outcomes to 70 reported hunter deaths, the adjusted non-hunter fatalities could be either 41 or 96 bears killed by non-hunting humans, or 35% lower or 52% higher than the reported estimate of 63. The paper should include an assessment of uncertainty of the study result due to small sample size.
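
The sensitivity of the adjusted estimate can be made explicit in a few lines (a sketch reproducing the figures in this paragraph):

    # Shift 0, 1, or 2 collared fatalities between categories and recompute
    # the adjusted estimate; numbers follow the paragraph above.
    F = 70  # reported uncollared hunter kills

    for shift in (0, 1, 2):
        for sign in (1, -1):
            h, n = 10 + sign * shift, 9 - sign * shift
            D = h / n
            print(f"shift {sign * shift:+d}: D = {D:.2f} -> adjusted non-hunter kills = {F / D:.0f}")
            if shift == 0:
                break  # the baseline needs printing only once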

References Cited

Horvitz, D. G. and D. J. Thompson. 1952. A generalization of sampling without replacement from a finite universe. Journal of the American Statistical Association 47:663–685.

Hurlbert S. H. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54:187-211.

Korner-Nievergelt, F., P. Korner-Nievergelt, O. Behr, I. Niermann, R. Brinkmann, and B. Hellriegel. 2011. A new method to determine bird and bat fatality at wind energy turbines from carcass searches. Wildlife Biology 17:350–363.

Validity of the findings

Another reason to scrutinize the reliability of the training data goes to the assumption that the ratio of hunter to non-hunter fatalities of telemetered bears represents the same ratio applied to non-telemetered bears. The paper implies that this assumption is valid because the telemetry units were small and difficult to see from a distance (line 107). However, this assumption could be flawed for two reasons that were not discussed in the paper. First, hunters using scoped rifles were likely to notice the telemetry units. If only one bear had been spared by a hunter who noticed the telemetry, and that bear had later died as a result of an encounter with a non-hunter (train, car, or self-defense), then the adjusted fatalities due to non-hunting human causes would have shifted from 51 to 63. Had two hunters made this decision and both bears later died due to non-hunting human causes, the shift would have been from 41 to 63. Second, the assumption might be flawed if telemetered bears shift their behaviors as a result of capture and handling. Alternatively, telemetered bears might have been captured because their behaviors differed from those of non-captured bears, making them easier to capture. For fatality rates of telemetered bears to represent those of non-telemetered bears, the telemetered bears would have had to be a random sample from the bear population, with behavior unaltered as a result of capture and handling.

Lack of experimental design adds another reason for interpreting the result cautiously. Even a mensurative study can benefit from the basic tenets of experimental design such as replication and interspersion of treatments, use of a control treatment, and appropriate spatial and temporal scales (Hurlbert 1984). As a wildlife ecologist I understand that studies of species such as grizzly bear rarely allow for the implementation of experimental design tenets. In the case of this study, the treatments of hunters and non-hunter killers of bears were mixed but not interspersed in the same study area, and there was no replication and no control treatment (sizable areas with no hunting allowed). Although experimental design tenets are understandably absent or at best weak due to the nature of the animal, the study result should be interpreted cautiously. I suggest that the paper present the under-reporting of non-hunter-killed bears as a possibility – a possibility that warrants focused research on the question.

Even in raising the possibility of under-reporting of non-hunter killed bears, I suggest that it would help to add more details about the bears reported killed this way. The paper lists the types of causes of death other than hunting, but because the under-reporting of non-hunter deaths is central to the paper, I suggest adding a table that summarizes the circumstances associated with each bear assigned to this category. How many were killed in self-defense? How many were killed by cars? It would help to report whether any of the bears assigned to the non-hunter fatality category carried bullets or showed other evidence of wounding by hunters. Confounding factors affecting only one or two bears in this study can greatly change the study’s outcome.

Additional comments

I appreciate the opportunity to review this paper, which presents an interesting and important possibility that grizzly bear fatalities caused by non-hunting humans are under-reported. I believe that the essential conclusion of the paper is likely correct, but I also believe that the evidence leading to the conclusion is not yet entirely convincing. The approach of using a training data set of grizzly bear fatalities to inform the estimate of non-hunter killed bears in the larger population has real potential, and raises interesting questions and challenges related to ensuring the estimate is accurate.

Reviewer 2 ·

Basic reporting

Abstract "Thus about 12% of the human-caused but non-hunter kills were reported. Between 1995 and 2015, the rate was 21%, suggesting an improvement in reporting rates" These 2 statements are both unclear.
"The study area may have low reporting rates because it is >40 km of gravel road from a Conservation Officer office so reporting is difficult and there are no residences so there is little concern of a neighbor contacting an officer." Minor issue but begs the question of whether reports can be called in or mailed in later or something.
"Across British Columbia (BC), McLellan et al (2017) found the death of almost twice as many males as females of each cohort ended up reported."
This statement is not clear enough. As with the abstract, the authors should clarify. Also, this sounds a bit like they are publishing the same data again?
The literature review in the Introduction is inadequate (also see below for an assumption made without considering the counter-evidence). Given the authors' goal is to understand unreported mortalities and the disappearances of female brown bears, I would have expected a thorough review of recent literature addressing cryptic poaching (Liberg), undocumented mortalities (Treves), and differential risk (Borg, Adams, Schmidt) in wolves, but none of this work was cited. Indeed the introduction reads as if the authors are almost the only ones studying this issue. This is a weakness that rises up again in their assumption (see below) and surfaces in the abstract (see above), and may undermine the methods.

Experimental design

"Assuming the rates and causes of death of collared bears are the same as uncollared bears, as has been done to estimate survival rates and population trends,…" Despite providing 7 citations, I am troubled by this assumption and the authors should be also. The assumption was shown to be unreliable for Alaskan wolves and for wolves in the lower 48 (citations above). The reasons for unreliability of the assumption is that collaring did not happen randomly but some individuals were more likely to be collared. Furthermore it appeared that in some cases collard and uncollated animals faced differential mortality risks and rates. I am concerned the authors are ignoring a rather substantial literature beyond grizzlies that has shed light on the central problem they are investigating. Also see above.
The descriptions of hunters and recreationalists in the study area deserve a citation to evidence.
I think it would be clearer throughout if the authors use "permitted" or "licensed" for the legal killing of bears by a permitted hunter rather than "hunted", which is ambiguous in several places, as it could be read to mean that a hunter with a permit for ungulates shot a bear illegally. Line 124 is a perfect example of the potential confusion.
"When collared bears were killed for other reasons, they too were often reported. If they were not, then we reported the deaths to the Conservation Officers." This needs clarification. Who is the subject doing the reporting in the first sentence? "often" is a value judgment unless data are presented. How did the authors know if a collared bear's death had not been reported? I can guess but slightly too much left to the imagination of the reader here. Then this sentence either answers my concern with 'often' or muddies the water: "Most collared bears (95%) that were known to have been killed by people for any reason are in the CI database." Does this 95% include those legally hunted and the legal government control? If so it is misleading coming on the heels of sentences pertaining to other causes. I would separate such reporting (truly mandatory) from the other types of mortality that are not so clearly mandatory, i.e., would a driver know to report a collision with a collared bear? And by the way these look like results so I suggest reorganizing.
Lines 103-110: Several problems here. First the authors gloss over the possibility (probability) of bears with non-functional collars out there. What did they do with those? Recent work cited above addresses this issue as a bias to precisely the type of analysis the authors are conducting here, so it cannot be ignored. The second issue is the assumption. While it is excellent practice that the authors are very clear and twice identify their starting assumptions, it is not so great that they do not propose sensitivity analyses for both components (risk and rate) of those assumptions. By the end it felt like they did not care if the assumptions were valid.
Lines 114-121 are almost clear. Because the supplementary data contain 102 dead bears, if I am understanding the dataset (better labeling would help), you might want to end by clearly stating what was included or excluded. Please clarify. Also, the one case of a collar strapped to a bottle and thrown in water being classified as human-caused makes sense, but can't it be classified a bit more precisely? After all, it wasn't a vehicle collision, right?

Validity of the findings

I am deeply concerned that the entire validity of the findings hinges on the 2 starting assumptions that collared and uncollared bears die at the same risk and rate. I cannot encourage publication without quantitative evaluation of these assumptions. And please note both assumptions are important and may be partially, but not perfectly, correlated. For example, if collared bears face a higher rate of mortality than uncollared bears for whatever reason but the risk patterns are the same, then perhaps the authors' estimates are somewhat biased upwards or downwards (a question of accuracy). But if the risks faced by uncollared and collared bears differ, then either the precision of their estimates would be off or their inferences might be erroneous.
Line 123-128: note recent literature from 2017 addresses deaths of unknown causes. One of the clearest conclusions coming out of that work is that unknown causes are NOT legal hunting or legal government killing, therefore the calculations in these lines MUST include the unknown causes because the authors are explicitly calculating a ratio of legal to other causes. There is no scientific rationale for excluding them as was once done under the erroneous assumption that unknown deaths are estimable from known deaths.
Line 136: "Assuming the collared sample is the true ratio,…" The authors missed the conclusions of the wolf research cited above. Neither sample is "true", both are biased but in complementary ways in some cases. 

Although I find their basic conclusion is probably correct that the vast majority (79-88%) of uncollared bear deaths were not in the governmental database, I worry about the accuracy of their calculations for the reasons stated above. It's very difficult to evaluate if the sex difference they claim to find is real or an artefact of their assumption and handling of collared bear data.

Additional comments

"Furthermore, the death of female bears may be more commonly unreported than males (Ciarniello et al. 2009, McLellan 2015), which could have important implications for population trajectories." 
While every one of your readers understands that females give birth and so directly affect population dynamics, this statement is misleading. It implies males are unimportant to population trajectories, which the authors know is inaccurate. Expose implicit assumptions as such, or rephrase to accord male brown bears their role in population dynamics (e.g., infanticide).

Reviewer 3 ·

Basic reporting

The writing, structure, and information provided in this manuscript are appropriate for a scientific article, and the raw data are supplied.

Experimental design

The subject matter is of interest to a narrow field of scientists. The results are quite specific to the study area in southern BC, and to BC in general. The analysis/math involved is quite rudimentary, so I would expect a more rigorous review of previous literature and a greater attempt to put this work into a larger context and show how it adds to previous work. I am not convinced it fills an identified knowledge gap as currently presented.

Validity of the findings

The challenge of estimating unreported mortality of grizzly bears is daunting. Although information from the deaths of radio-marked bears is the most appropriate data for helping to estimate unreported mortality, sample sizes are generally quite small given the relatively high survival of adult bears and the resulting low number of observed deaths. Therefore, even with the most representative data, inference may be subject to sampling variance. Still, Cherry et al. (2002) proposed a method for estimating unreported mortality based on a Bayesian analysis using a prior distribution informed by the reporting rate (ratio of reported to unreported) for grizzly bear deaths documented among a sample of radio-marked bears. This manuscript uses a similar approach (although non-Bayesian), but there are significant differences in the assumptions, and for that reason I have some concerns.
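
For readers unfamiliar with this kind of informed-prior calculation, a minimal sketch follows; the counts are hypothetical and this illustrates only the general idea, not a reconstruction of Cherry et al.'s actual model:

    import random

    # Hypothetical illustration: form a Beta prior on the reporting rate p
    # from a collared-sample reporting record, then propagate that uncertainty
    # into an estimate of total deaths. Counts are invented for the example.
    random.seed(1)
    reported_collared, unreported_collared = 8, 9  # hypothetical collared fates
    reported_uncollared = 10                       # deaths in the agency database

    totals = []
    for _ in range(10_000):
        p = random.betavariate(1 + reported_collared, 1 + unreported_collared)
        totals.append(reported_uncollared / p)     # implied total deaths

    totals.sort()
    print(f"median ~{totals[len(totals) // 2]:.0f}, "
          f"~95% interval ({totals[250]:.0f}, {totals[-251]:.0f})")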

The analysis presented is logical and clearly serves to demonstrate that human-caused mortality (other than hunting) is under-reported. If the intent of the manuscript is primarily to make this point, then it should be titled something like “Reporting rate low for non-hunting human-caused mortality of grizzly bears in the Flathead Valley, BC, Canada”. However, the current title suggests that the intent of the manuscript is not just to show that reporting rate is low, but to estimate (or perhaps predict) it. I would presume this estimate might then be used in other population analyses. This is where I have some concerns.

First, unlike Cherry et al., the current authors do not actually utilize information about an observed reporting rate to estimate the true reporting rate. On line 99, they state “when collared bears were killed for other reasons [than hunting], they too were often reported.” How often? Table 1 indicates that there were 12 non-hunting mortalities and 5 natural mortalities that were observed among radio-marked bears. Given the sample size, the observed reporting rate may have been as low as 6% (1 of 17), as high as 94% (16 of 17), or in between. By sex, it could have ranged from 10% to 90% (n = 10) for females and 14% to 86% (n = 7) for males. These ranges certainly include the estimated 12% reporting rate obtained in their analysis, so at the very least, I think this observed reporting rate should be provided so that a reader can compare. If the two estimated reporting rates are similar, it would add support to the validity of the estimate. If not, it might bring the estimate into question.

Cherry et al. attempted to explicitly estimate the reporting rate for causes of death which were known or assumed to have <100% reporting. As such, they specifically excluded agency removals from analysis, because they were assumed to be an accurate count. Again, they were utilizing the ratio of reported to unreported within a small sample to extrapolate to the population. It is a subtle difference, but this is not what the current authors did. They argue that “like all bears killed legally by hunters, all collared bears that hunters killed were reported.” In other words, hunter kills have a 100% reporting rate. So, in essence, this current analysis assumes some quantifiable ratio between the number of bears that died from a cause of death with 100% reporting to the number of bears that died from causes of death with <100% reporting. I would argue this is a more tenuous assumption. On the one hand, I can see the rationale. One could argue that all causes of death are most correlated with population size, so that if hunting mortality goes up, non-hunting mortality also goes up. On the other hand, though, there is the argument that hunting mortality may be inversely correlated with some sources of mortality, like poaching.

Examining the difference in reporting rate between sexes is also a major theme in the manuscript. This appears motivated by the McLellan (2017) paper, which states: “We also found the death of almost twice as many males as females were reported. There are 3 reasons why fewer female deaths were reported than males: an unequal sex ratio at birth, reporting errors (Schliebe et al. 1999), or females have higher natural and unreported human-caused mortality rate…Although sample sizes were small, 2 of 2 deaths of female bears monitored in the mountains of central British Columbia by Ciarniello et al. (2009) and 3 of 3 female bears in the mountains of southwestern British Columbia (McLellan and McLellan 2015) died of natural causes. It is also possible that, because of their protective behavior when with cubs, more females than males are killed by ungulate hunters and not reported (Mace et al. 2012, McLellan 2015).” So, after arguing that natural mortalities (which we can certainly assume have <100% reporting) may partially account for the “missing” female mortalities in the CI database, the authors choose to exclude natural mortalities from this current analysis and focus entirely on human-caused mortalities. Why? This seems like an odd choice, especially given that female natural mortalities were more numerous than male natural mortalities within the radio-marked sample (4 versus 1 in Table 1). Of course, if natural mortalities are included in the analysis, then the estimated reporting rates would be even lower.

So what is the proposed application of this analysis? Is it relevant to anyone else studying grizzly bears? For example, are the authors proposing that the estimated 12% reporting rate be used in conjunction with the CI database to estimate total human-caused mortality? It would appear so, given lines 141-151, where they do just that. Again, I am not claiming that this is wrong, but perhaps the authors could provide an argument for why they chose not to directly estimate reporting rate to obtain the estimate of total mortality. Are they, perhaps, suggesting that the estimated reporting rates obtained from a radio-marked sample are biased high? If so, then I think this point might be made more strongly. In fact, perhaps this is where this manuscript needs to go, to make it more useful for biologists working in other areas.

I think the math used to derive the 12%, 18%, and 5% reporting rates (total, male, female, respectively) should be more clearly described in the results and/or shown in the table. I don’t think it should be left to the reader to do the math. For example, “While the ratio between the number of legal hunter kills to non-hunting kills was 0.833 for the collared sample when the suspected kills were included, it was 7.0 (70:10) for the uncollared bears in the CI database. Assuming the collared sample is the true ratio, by applying the ratio to the number of uncollared legal hunter kills (70/0.833), we derive an estimate of 84 non-hunter kills. Since only 10 uncollared non-hunter kills were reported, this would indicate that only 12% of the non-hunting human-caused mortalities were in the CI database.” I also think the authors should clearly state that they are applying the observed ratio (from collared bears) to just the uncollared bears, and explain why. Again, if this estimate is meant to be applied to real world data for management purposes, it would be instructive to explain how data obtained from collars should be treated.
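
Spelled out, the arithmetic in the passage above is as follows (a sketch; variable names are illustrative):

    # The arithmetic from the quoted passage, made explicit.
    collared_ratio = 10 / 12      # hunter : non-hunter kills, suspected included (~0.833)
    uncollared_hunter = 70        # legal hunter kills in the CI database
    uncollared_nonhunter = 10     # reported uncollared non-hunter kills

    expected_nonhunter = uncollared_hunter / collared_ratio    # ~84
    reporting_rate = uncollared_nonhunter / expected_nonhunter # ~12%
    print(round(expected_nonhunter), f"{reporting_rate:.0%}")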

Reporting rate varies by cause of death. Although the authors touch on this, they are not particularly direct about it. It would be nice if the causes of death among radio-collared bears were listed, along with whether or not each death was reported. Without that, I can only surmise from lines 167-181 that most of the mortality was either illegal or natural. It is completely expected that reporting rate would be low for those causes of death, and low reporting rates have been previously reported (Costello et al. 2016 - grizzly bear demographics in the Northern Continental Divide Ecosystem).

In summary, I think this manuscript could be greatly improved by comparing the radio-marked reporting rate to the estimated reporting rate obtained using this current method, followed by discussion of the factors that might account for any differences. Another option is to evaluate the estimated reporting rate with other population analyses. In its current form, the manuscript provides little that is useful for biologists working in other areas, perhaps even within BC.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.