REVIEW
Article: Twisted tale of the tiger: the case of inappropriate data and deficient science.
INTRODUCTION
The article titled “Twisted tale of the tiger: the case of inappropriate data and deficient science”, although written as an independent article submitted for peer review, appears to be an attempted rebuttal of a publication by Gopalaswamy et al. (2015a,b). The authors create an initial impression that they are exposing a case of scientific fraud, particularly by referring to Gopalaswamy et al. (2015a,b). However, their various claims subsequently fail the test of careful scientific scrutiny because of the weak, incomplete and contradictory arguments they advance in support of them. A discerning reader will note that many of these arguments, in fact, buttress rather than weaken the findings reported by Gopalaswamy et al. (2015a,b). At the end of a careful reading of the entire manuscript, virtually all the arguments advanced by the authors with the objective of critiquing Gopalaswamy et al. (2015a,b) reinforce the central findings of that study, leaving one quite baffled as to the very purpose of launching such a flamboyantly titled critique.
DETAILED COMMENTS
l.39: The purpose of italicizing the words ‘experimentation’ and ‘rigorous scrutiny’ is unclear, and the emphasis is potentially misleading.
l.41: It will be important to define the word “Science” here. Many view ‘science’ as only a method to generate knowledge. Readers may prefer the word “Knowledge” or “Scientific knowledge” in this sentence, to suit the word ‘fabricated’ that appears later in the sentence.
l.42-46: The authors make sweeping claims about inadequacies in the very process of scientific peer review, attributing these to (a) the reputation of the authors, (b) vested interests of reviewers and (c) the failure of peers to check the authenticity of data. Readers would like to see these specific claims backed by citations/studies that substantiate these hypothesized problems.
l. 46: At this point the authors should clarify the basic intention and the question they want to pose, as an explicit scientific hypothesis, in order to help readers keep track of the arguments they subsequently advance. This would be specifically relevant since they intend to discuss different types of inadequacies in the peer review process (l.42-46).
l. 46-52: I would recommend that this section on Florida panthers be moved to the next paragraph and supported by more examples relevant to the subject of this article, to bolster the earlier claims about ‘fabrication of science’ (Martinson et al. 2005). The Florida panther example that the authors discuss here resulted from a combination of poor science and bad policy. Readers will be curious to know what the authors intend to imply through this example. For example, scientists may ask poor questions, adopt poor methodologies or gather poor data, either singly or in combination, resulting in bad science, and may try to publish their results in peer-reviewed journals. Such studies may also make policy recommendations based on flawed results. If such recommended policies are adopted uncritically, does the responsibility rest on the scientists or the policy makers or both? Readers would be keen to know whether the authors are using the Florida panther example to highlight the practice of ‘fraudulent science’ (as implied by the authors in l. 42-43), of ‘bad science’ (poor questions/poor data/poor methods) or of bad policy-making (as referred to by the authors in l.50-52). I would recommend that the authors explicitly state the specific question(s) and objective(s) they pursue in the manuscript.
l. 52: Recently, Darimont et al. (2018) coined the term “political populations”, referring to exaggerated claims made by government agencies about successes in large carnivore population recoveries because of vested interests. Their study cites examples of serious mismatches between official claims and the results of scientific studies of population dynamics of political species such as wolves, brown bears and tigers. This manuscript would benefit from citing and summarizing the findings of Darimont et al. (2018), which also deals with large carnivore population assessments at large spatial scales. Readers will note that Darimont et al. (2018) demonstrate the need for independent investigators to verify claims made by official agencies about large carnivore population dynamics, supporting the ideas of Martinson et al. (2005), thus making it very relevant to this particular manuscript. Furthermore, to prove their case for the prevalence of scientific malpractice in ecology and conservation biology (as mentioned in lines 41-46), the authors should provide more specific examples of such cases, to justify that this is an important problem for the scientific community to address.
l. 57: The main part of the article (the critique of Gopalaswamy et al. (2015)) begins very abruptly here. Readers would be keen to know the broader context within which Gopalaswamy et al. (2015) gains importance, especially with respect to tigers (as demanded by the title) and India’s national tiger assessments. The authors should summarize this broader context so that readers can appreciate the significance of this manuscript. Some relevant points include:
(1) India’s tiger monitoring: As in the case of the Florida panther example, the authors can explain how India’s previous approach to monitoring tigers (via the “pugmark census” method) failed to detect the extinction of tigers from two important tiger reserves in India (Sariska and Panna) (Yumnam et al. 2014), and how the extinction of tigers from Sariska prompted the formation of the Tiger Task Force (Tiger Task Force 2005), which mandated a shift in the way tiger populations are monitored. This background information will enable readers to appreciate the striking similarity between the Florida panther case and the Indian tiger monitoring case (which is subsequently explored at length). It would also be helpful for readers to know, at least briefly, what the subsequently modified protocols for monitoring tigers in India’s national tiger assessments entailed (from Jhala et al. 2008, Jhala et al. 2011b, Jhala et al. 2015). This becomes relevant since the manuscript aims to connect the concepts it discusses with these national tiger assessments.
(2) Next, the authors may describe how India claimed a 30% tiger population rise (from 2010 to 2014) in January 2015. They can then summarize the claims made by Gopalaswamy et al. (2015a), published a month later, which somewhat undermined the Indian government’s claim of a tiger population rise. This will provide the context within which Gopalaswamy et al. (2015a) gains importance. The authors can also write about the immediate reactions to Gopalaswamy et al. (2015a). In particular, they can mention how scientists and officials affiliated to the Indian government wrote to the journal (Methods in Ecology and Evolution, a journal of the British Ecological Society) and demanded that the article (Gopalaswamy et al. 2015a) be summarily withdrawn, without offering a formal rebuttal (Vishnoi 2015, Kempf 2016). Since one of the claims of the manuscript (in l.42-46) is that the peer review process itself is not foolproof at detecting fraudulent science, readers would be curious to know how the journal dealt with the situation, given that this issue was brought to its attention by the dissenting scientists and officials. In particular, readers would be interested to know (a) whether the journal had in place any system (e.g., COPE guidelines) to tackle such situations and (b) whether the journal offered the dissenting scientists/officials the opportunity to publish a rebuttal at the time, and what the scientists/officials did subsequently. Further, if the authors of this manuscript support the approach taken by the dissenting scientists as a radically new alternative to the conventional system of peer-reviewed rebuttals and responses, then I would recommend that the authors frame this as a valid alternative approach to advancing science and use the findings of this manuscript to strengthen or weaken their claim.
(3) After (1) and (2), which should give readers adequate context to judge, a reiteration of what the authors are setting out to prove is necessary to keep the focus of the manuscript sharp. For example, the authors can explicitly state that they will demonstrate how the peer review process actually failed to detect the fraudulent scientific practices they attribute to Gopalaswamy et al. (2015a,b). Thus the stage would be set for clearly articulating the key arguments of this manuscript for readers.
l.59: The authors cite Gopalaswamy et al. (2015), and in the reference list they refer only to the following citation (corresponding to Gopalaswamy et al. (2015a)):
Gopalaswamy, A. M., Delampady, M., Karanth, K. U., Kumar, N. S. and Macdonald, D. W., An examination of index-calibration experiments: counting tigers at macroecological scales. Meth. Ecol. & Evol., 2015, 6, 1055-1066, doi:10.1111/2041-210X.12351.
However, a Corrigendum was also published in the same journal in the same year. In this Corrigendum (Gopalaswamy et al. 2015b) the authors correct a couple of algebraic errors in one of the six mathematical derivations:
Gopalaswamy, A. M., Delampady, M., Karanth, K. U., Kumar, N. S. and Macdonald, D. W., Corrigendum. Meth. Ecol. & Evol., 2015, 6, 1067-1068, doi:10.1111/2041-210X.12400.
It is possible that these authors have not read this Corrigendum, thereby rendering many of the arguments they now present invalid or obsolete. Given that their main objective is to demonstrate that the publication by Gopalaswamy et al. (2015) is an outcome of fraudulent scientific practice, the authors appear not to have done the background research necessary to mount their scientific critique. This will invite the counter-criticism that they are cherry-picking references to attack a straw man.
l.57-66: This part confirms the speculation that the authors have not fully read or understood what Gopalaswamy et al. (2015b) presented in their publication.
Further, anyone who starts reading Gopalaswamy et al. (2015a,b) will realize that these are papers that fundamentally develop index-calibration relationships mathematically, by utilizing the binomial and beta-binomial models, and consequently derive the R2 parameter for the two cases, also considering special bounded cases in these derivations. The scope of these derivations would therefore apply, where relevant, to any scientific context.
It is only in the second part of the paper that they use these mathematical results to assess outcomes from some tiger surveys. As such, there is no ‘validation’ as claimed by the authors, so readers will find a mismatch between how the authors summarize Gopalaswamy et al. (2015a,b) and how these papers are actually written.
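To make this contrast concrete for readers, the following minimal simulation sketch may help (written in Python purely for illustration; the parameter values and the crude density-to-detection link are my own assumptions, and the derivations in Gopalaswamy et al. (2015a,b) are analytical, not simulation-based). It shows how site-level heterogeneity in detection probability, the feature the beta-binomial model captures, typically depresses the R2 of an index-calibration regression relative to the plain binomial case:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sites, n_segments = 30, 100

    # Hypothetical true densities across sites; mean detection probability
    density = rng.uniform(1, 15, n_sites)
    p_mean = 0.4

    def index_r2(cv_p):
        # R2 of an index-density calibration when detection probability p
        # varies across sites with coefficient of variation cv_p
        # (cv_p = 0 recovers the plain binomial case).
        if cv_p == 0:
            p = np.full(n_sites, p_mean)
        else:
            var = (cv_p * p_mean) ** 2
            a = p_mean * (p_mean * (1 - p_mean) / var - 1)
            b = a * (1 - p_mean) / p_mean
            p = rng.beta(a, b, n_sites)  # beta-binomial mixture on p
        # Index: proportion of segments with detections; the per-segment
        # detection probability is linked to density by an assumed form.
        seg_p = 1 - (1 - p) ** (density / 5)
        index = rng.binomial(n_segments, seg_p) / n_segments
        return np.corrcoef(index, density)[0, 1] ** 2

    print("R2, binomial (cv_p = 0):       ", round(index_r2(0.0), 3))
    print("R2, beta-binomial (cv_p = 0.5):", round(index_r2(0.5), 3))

Under such a setup the binomial case typically returns a markedly higher R2 than the beta-binomial case, which is the essence of the contrast the two derivations formalize.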
l.61-63: The interchangeable use of the two technically different terms ‘index-calibration’ and ‘double-sampling’ by the authors further clouds their arguments and will likely confuse readers. I recommend that the authors first clarify the difference between the two terms. Thereafter, the authors can summarize the conclusions of Gopalaswamy et al. (2015a,b) that they are hoping to undermine. Any reader of Gopalaswamy et al. (2015a,b) will notice that, apart from historical mentions of double-sampling in the introduction (e.g. Eberhardt and Simmons 1987, Pollock et al. 2002), the entire article is about assessing the validity of R2-based index-calibration experiments. In this context it is not clear why the authors repeatedly use the term double-sampling as an argument in support of their critique. If anything can be said about double-sampling at all, it is that both Gopalaswamy et al. (2015a,b) and Jhala et al. (2015) demonstrate how daunting a task it is to conduct double-sampling in practice across tiger landscapes.
l. 65-66: In the context of Gopalaswamy et al. (2015a,b), if anyone plugs values into their formulae, the findings of Jhala et al. (2011a) can be theoretically reproduced under favorable conditions (high and non-variable p, among a few other parameters in the R2 formulae). Therefore, the sentence in l. 65-66 is a technical inaccuracy in the manuscript. Further, Gopalaswamy et al. (2015a,b) conclude that the estimate of the R2-statistic for the IC-Jhala experiment (from Jhala et al. 2011a) is “anomalously high” and that the estimate of the R2-statistic for the IC-Karanth experiment (from Gopalaswamy et al. 2015a,b) is “anomalously low”.
l. 65: If the authors are attempting to demonstrate fraudulent scientific practices of Gopalaswamy et al. (2015a,b) by using the study of Jhala et al. (2011a) as a basis, the following issues immediately arise and must first be addressed before they proceed:
• The theoretical basis of the entire Jhala et al. (2011a) experiment seems to rest solely on the unproven assumption that sign encounter rates will ‘stabilize’ when tracks are recorded at sampling lengths of 4-5 kilometers, for which they cite only an unpublished pilot study (Jhala and Qureshi, unpublished). What Jhala et al. (2011a) mean by the word ‘stabilize’ is unclear.
• The model thus developed has no basis in ecological or sampling theory to justify it.
• There is a large amount of multicollinearity in the fitted model of Jhala et al. (2011a). In fact, Jhala et al. (2011a) inadvertently reveal this: they find that, on removing the track encounter rates, the regression coefficient on scat encounter rates jumps by 63%, demonstrating how unstable their fitted regression model really is (see the sketch following this list).
• Despite having tiger sign encounter rate data for 29 sites across India (Jhala et al. 2011b), Jhala et al. (2011a) selectively delete all 8 sites from southern India, offering no justification for doing so. Given that the authors repeatedly defend the concept of double-sampling in this manuscript (Eberhardt and Simmons 1987, Pollock et al. 2002), this selective sampling approach taken by Jhala et al. (2011a) severely undermines all the explanations about double-sampling in the manuscript and further strengthens the conclusions of Gopalaswamy et al. (2015a,b).
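To illustrate for the authors the instability that such a coefficient jump implies, here is a minimal sketch with entirely synthetic data (the numbers are mine, not those of Jhala et al. 2011a; only the phenomenon is at issue):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 21  # illustrative site count only

    # Hypothetical, strongly collinear sign indices (tracks and scats)
    tracks = rng.uniform(0, 5, n)
    scats = 0.9 * tracks + rng.normal(0, 0.4, n)
    density = tracks + scats + rng.normal(0, 1.0, n)

    # Fit density on both indices, then on scats alone
    X_full = np.column_stack([np.ones(n), tracks, scats])
    X_red = np.column_stack([np.ones(n), scats])
    beta_full, *_ = np.linalg.lstsq(X_full, density, rcond=None)
    beta_red, *_ = np.linalg.lstsq(X_red, density, rcond=None)

    print("scat coefficient, tracks included:", round(beta_full[2], 2))
    print("scat coefficient, tracks dropped: ", round(beta_red[1], 2))

With two strongly collinear predictors, dropping one transfers its explanatory burden onto the other, so the retained coefficient changes drastically; this is exactly the symptom of an unstable fitted model.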
More pertinently, if readers contextualize the Jhala et al. (2011a) study in light of the recent national tiger assessment (Jhala et al. 2015), they will realize that Jhala et al. (2015) squarely contradicts the findings of Jhala et al. (2011a). Conversely, if the authors begin their arguments against Gopalaswamy et al. (2015a,b) by justifying the robustness of Jhala et al. (2011a), then it inevitably means they are refuting Jhala et al. (2015). These contradictions raise two further pertinent questions for the reader: (1) Jhala et al. (2011a) was also published in a peer-reviewed journal (Journal of Applied Ecology), so why shouldn’t the questions being asked of Gopalaswamy et al. (2015a,b) by the authors be asked of Jhala et al. (2011a) as well, especially in the face of the above factors? (2) As Jhala et al. (2011a) also had an influence on India’s national tiger assessment (Jhala et al. 2011b), why did the scientists and officials affiliated with the Indian government not react in the same way as they did when Gopalaswamy et al. (2015a,b) came out? These discussions will establish the link with Darimont et al. (2018).
l. 71-83: If the difference between double-sampling and index-calibration is introduced earlier (as recommended above), the explanation here is redundant.
l.84-112: All the derivations in Gopalaswamy et al. (2015a,b) for the binomial and beta-binomial models use the individual detection probability p, so the authors will need to clearly spell out what they mean by this parameter. Further, in line 89 they define p as the number of surveys (which is a count) and not a probability, and then in line 95 they change course and call it a probability. This would leave readers confused. The authors then introduce another source of variation (sign-level detection probabilities r) and discuss, in barely three lines, how these might potentially be estimated (by a double-observer survey), but do not conduct any study themselves to try this or to demonstrate whether and how it works. In their example, they indicate a large source of variation in r (r=0.1 and r=0.9) and claim that Gopalaswamy et al. (2015a,b) cannot capture these variations in r. To prove their point, the authors will need to demonstrate this with explicit statements of probability, their relationships to the equations derived in Gopalaswamy et al. (2015a,b), and fresh derivations of the R2 parameter. At the least, it will require the authors to carry out a thorough simulation exercise to prove their claim.
But, once again, the claim itself will perplex a discerning reader, because the introduction of one more source of variation at the sign level, using r, and the associated unaccounted heterogeneity in this probability imply a greater degree of overdispersion (as the sketch below illustrates). So this argument further buttresses the claims made in Gopalaswamy et al. (2015a,b), and consequently raises further doubts about the inferences emerging from the national tiger assessments (Jhala et al. 2008, Jhala et al. 2011b, Jhala et al. 2015) as well as from Jhala et al. (2011a). By this point readers will wonder why the authors are so strongly supporting the claims of Gopalaswamy et al. (2015a,b) when, in fact, the stated purpose of this manuscript was to critique them. What are we missing here?
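The overdispersion point is easy to verify numerically. In the following minimal sketch (the r=0.1 and r=0.9 extremes are taken from the manuscript’s own example; everything else is a hypothetical assumption of mine), unmodelled heterogeneity in a sign-level detection probability inflates the variance of detection counts far beyond the binomial variance, i.e. it creates precisely the overdispersion that Gopalaswamy et al. (2015a,b) warn about:

    import numpy as np

    rng = np.random.default_rng(3)
    n_sites, n_signs = 10000, 20

    # Homogeneous case: every sign detected with r = 0.5
    counts_hom = rng.binomial(n_signs, 0.5, n_sites)

    # Heterogeneous case: half the sites have r = 0.1 and half r = 0.9
    # (same mean r = 0.5, but unmodelled variation across sites)
    r_het = rng.choice([0.1, 0.9], n_sites)
    counts_het = rng.binomial(n_signs, r_het)

    print("binomial variance (n*r*(1-r)):     ", n_signs * 0.5 * 0.5)
    print("observed variance, constant r:     ", round(counts_hom.var(), 2))
    print("observed variance, r in {0.1, 0.9}:", round(counts_het.var(), 2))

The heterogeneous case yields a variance an order of magnitude larger than the binomial benchmark even though the mean detection probability is identical, which is the definition of overdispersion.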
Now, in lines 104-107, the authors discuss a “parameter unidentifiability” issue between occurrence and detection. They first need to clarify whether they are referring to occurrence (MacKenzie et al. 2002) or local occurrence (Hines et al. 2010). This is a relevant topic of discussion, especially with respect to the national tiger assessment (Jhala et al. 2011b), the results of which are used in Gopalaswamy et al. (2015a,b) for their analysis. Jhala et al. (2011b) use a primitive version of the occupancy model (MacKenzie et al. 2002) compared to the more relevant Hines et al. (2010) model, and it is in the MacKenzie et al. (2002) model that this unidentifiability issue is inherent. So, here too, it is the national tiger assessment (Jhala et al. 2011b) that the authors are targeting. This will further confuse readers about which study or set of studies the authors are criticizing here.
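For readers unfamiliar with this issue, a minimal sketch of the generic occurrence-detection confounding may help; it uses the simplest single-survey case purely for clarity (the precise form of the identifiability problem in the Jhala et al. (2011b) application is more subtle, but the underlying principle is the same):

    import numpy as np

    # Single-visit occupancy data: with one survey per site,
    # P(detection at a site) = psi * p, so the likelihood depends on
    # occurrence (psi) and detection (p) only through their product.
    def log_lik(psi, p, detections, n_sites):
        q = psi * p
        return detections * np.log(q) + (n_sites - detections) * np.log(1 - q)

    detections, n_sites = 30, 100  # hypothetical data

    # Very different (psi, p) pairs with the same product psi * p = 0.3
    for psi, p in [(0.5, 0.6), (0.6, 0.5), (0.75, 0.4), (1.0, 0.3)]:
        print(psi, p, round(log_lik(psi, p, detections, n_sites), 4))

All four (psi, p) pairs yield identical log-likelihoods, so data of this structure cannot separate occurrence from detection; only additional survey structure can break this confounding, and whether the structure used in a given application does so is exactly the question at issue here.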
Further, in line 106, the authors discuss the importance of teasing apart scats and tiger pugmarks. Once again, Jhala et al. (2011a) demonstrate that there is a high degree of multicollinearity between tiger tracks and scats, and also show that the best model to predict tiger density involves both tracks and scats. So, by saying it is vital to tease these two apart, the authors imply that they are contradicting the findings of Jhala et al. (2011a). Once again, readers are left confused as to which study is being targeted here.
l. 115-131: India’s national tiger assessment (Jhala et al. 2011b) was a massive nationwide exercise involving 477,000 man-days of effort by forest staff and 37,000 man-days of effort by professional biologists. Results from this survey and from Karanth et al. (2011a) have been used in Gopalaswamy et al. (2015a,b). Here, the authors indirectly imply that the countrywide estimates from Jhala et al. (2011b) should not have been used in the analysis of Gopalaswamy et al. (2015a,b), and that results from two other smaller-scale surveys (Harihar and Pandav 2012, Barber-Meyer et al. 2013) should have been used instead. The authors must clarify why they do not place much faith in the estimates of Jhala et al. (2011b), as these are the most indicative of the population-level estimates of the parameters (in this case, of p).
I do agree with the authors that there is an inherent parameter identifiability issue with the analysis of Jhala et al. (2011b), because they relied on the older MacKenzie et al. (2002) model. But when the equivalent, identified, probabilities from Karanth et al. (2011a) are brought in, Karanth et al. (2011a) having used the more advanced Hines et al. (2010) model, the detection probability estimates of Jhala et al. (2011b) and Karanth et al. (2011a) turn out to be quite comparable (see the table of detection probabilities in Gopalaswamy et al. 2015a,b). So, it is not justifiable to discard the estimates of such a large nationwide effort (Jhala et al. 2011b) without sound reason, even while recognizing that the analysis of Jhala et al. (2011b) emerges from an outdated model.
But, for the moment, suppose we take the estimates of segment-level p from Harihar and Pandav (2012) and Barber-Meyer et al. (2013), as proposed by the authors. The authors indicate that the estimates of p from those studies are constant and high, as follows: Harihar and Pandav (2012), p=0.951 (SE=0.05), and Barber-Meyer et al. (2013), p=0.65 (SE=0.08). If readers refer to Harihar and Pandav (2012) and Barber-Meyer et al. (2013), they will find that this statement (in lines 126-127) is misleading, because both studies show that detection probabilities in fact vary over a range of values (from low to high) and are not constant as stated in this manuscript. Harihar and Pandav (2012) show that the segment-level detection probability within THB-1 itself varies from 0.179 to 0.282 to 0.386, and then jumps to 0.947 in THB-2. Such a large variation in p actually indicates that the Cv assumption of 1 made in Gopalaswamy et al. (2015a,b) is perhaps too conservative. Similarly, Barber-Meyer et al. (2013) also show a huge variation in detection probability (from 0.22 for observers with less experience to 0.73 for observers with high experience). So, if the estimates of Jhala et al. (2011b) are ignored (owing to the parameter unidentifiability issue) and only studies where the parameters are fully identified are considered (Harihar and Pandav 2012, Barber-Meyer et al. 2013 and Karanth et al. 2011a), the estimate of Cv will likely exceed the conservative value used in Gopalaswamy et al. (2015a,b), further strengthening the claims of Gopalaswamy et al. (2015a,b), particularly in the context of the tiger example.
So, this example provided by the authors will further perplex readers, because the authors’ arguments continue to strengthen Gopalaswamy et al. (2015a,b) even though their stated objective was to demonstrate fraudulent scientific practice in that study. Yet, despite strengthening the conclusions of Gopalaswamy et al. (2015a,b), they continue to state the opposite in the text.
It also appears that the authors are confused about which Cv they must use for the analysis. The Cv considered in Gopalaswamy et al. (2015a,b) refers to the actual variation in detection probability when modeled against covariates, not to the estimation standard error invoked by the authors (the sketch below illustrates the difference).
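The distinction is easy to make explicit. Using the point estimates as transcribed above (purely for illustration), the two quantities the authors appear to conflate differ by an order of magnitude:

    import numpy as np

    # Segment-level detection probability estimates for THB-1 and THB-2,
    # as transcribed in this review from Harihar and Pandav (2012)
    p_segments = np.array([0.179, 0.282, 0.386, 0.947])

    # Cv in the sense of Gopalaswamy et al. (2015a,b): actual variation
    # of p across sampling units (population SD / mean)
    cv_process = p_segments.std() / p_segments.mean()

    # Cv in the sense apparently used by the authors: the estimation
    # standard error of a single reported estimate (p = 0.951, SE = 0.05)
    cv_estimation = 0.05 / 0.951

    print("Cv of p across segments (process variation): ", round(cv_process, 2))
    print("Cv from one estimate's SE (estimation error):", round(cv_estimation, 3))

Only the first quantity is the Cv that enters the R2 formulae of Gopalaswamy et al. (2015a,b).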
l. 128-131: The authors will need to support this offhand statement with scientific data. The consequences of this statement (if true) would be very relevant to the national tiger assessments because: (1) the statement is indicative of parameter covariation (as discussed in Gopalaswamy et al. (2015a,b)), thereby further supporting Gopalaswamy et al. (2015a,b) and undermining the estimates from all three national tiger assessments (Jhala et al. 2008, Jhala et al. 2011b, Jhala et al. 2015); (2) combined with the parameter unidentifiability issue discussed above, it would likely imply that the national tiger assessment (Jhala et al. 2011b) is an overestimate, though this would need to be fully worked out. But again, the authors seem to further support the findings of Gopalaswamy et al. (2015a,b), which defeats the objective of the manuscript.
l. 133-181: Once again, what the authors aim to achieve with this bit of analysis is unclear.
Gopalaswamy et al. (2015a, b) have stated that they found the estimate of the R2 from the IC-Karanth index-calibration model to be “anomalously low” and attribute this to poor sample size (n=8) and unrepresentative selection of sites. This finding logically implies that using this model (IC-Karanth) will lead to faulty predictions.
Now, the authors of this manuscript take a detailed look at the data points used in the IC-Karanth index-calibration experiment and identify potential reasons why Gopalaswamy et al. (2015a,b) do not place faith in this model. They hypothesize that, because of the time lag between some of the tiger sign index surveys and the tiger density surveys, combined with natural tiger density fluctuations (Karanth et al. 2006), the track encounter indices used in Gopalaswamy et al. (2015a,b) may not accurately mirror true densities at a given point in time. So, effectively, the authors are stating that in addition to the issues identified by Gopalaswamy et al. (2015a,b), i.e. poor sample size and unrepresentative selection of sites, there is also a problem of time lag, thereby confirming the finding of Gopalaswamy et al. (2015a,b) that the IC-Karanth index-calibration model is unreliable for making predictions.
In their analysis, to demonstrate that time-lag effects could be an additional factor, they further sacrifice the sample size (from n=8 to n=4), by considering only what they classify as ‘legitimate data’ in the IC-Karanth index-calibration model, and arrive at an estimated R2 = 0.642. At this point, it is necessary for the authors to explain why they believe this result is significant, and particularly why they believe it demonstrates fraudulent scientific practice in Gopalaswamy et al. (2015a,b).
This is necessary because the further analysis brings to light some new questions:
(1) Are tiger densities fluctuating with such high amplitudes as these time-lagged data indicate? If so, this would immediately call into question India’s claims of a steadily rising tiger population (Jhala et al. 2008, Jhala et al. 2011b, Jhala et al. 2015).
(2) As with any sample statistic, when the sample size is reduced, the variance around the estimated R2 will increase. So, by changing the sample size from n=8 to n=4, they increase the variance of the R2 statistic (see the sketch following this list).
(3) Finally, all the arguments presented by the authors in l. 133-181 become completely irrelevant when confronted with the findings reported by Jhala et al. (2015), which strongly validate the findings of Gopalaswamy et al. (2015a,b).
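To illustrate point (2) above, a minimal simulation sketch (with an arbitrarily assumed population R2 of 0.6) shows how the sampling variability of the estimated R2 grows when the sample size is halved from n=8 to n=4:

    import numpy as np

    rng = np.random.default_rng(11)

    def r2_sd(n, reps=10000, true_r2=0.6):
        # Standard deviation of the R2 statistic computed from samples of
        # size n drawn from a bivariate normal with population R2 = true_r2
        rho = np.sqrt(true_r2)
        out = np.empty(reps)
        for i in range(reps):
            x = rng.normal(size=n)
            y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
            out[i] = np.corrcoef(x, y)[0, 1] ** 2
        return out.std()

    for n in (8, 4):
        print(f"n = {n}: sd of estimated R2 = {r2_sd(n):.3f}")

The spread of the estimated R2 at n=4 is substantially larger than at n=8, so an R2 computed from four points carries very little evidential weight.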
l. 186-201: I would submit that, contrary to the authors’ averments, the Gopalaswamy et al. (2015a,b) study demonstrates exactly what the authors are alluding to here. There is a parameter Q in the derivation of the population parameter R2 which captures the effect of the sampling error around the estimates of density. The value of Q computed in Gopalaswamy et al. (2015a,b) is lower for the model of Jhala et al. (2011a) than for that of Karanth et al. (2004). Thus, once again, the authors are mathematically validating the findings of Gopalaswamy et al. (2015a,b), in contrast to their verbal rhetoric against the study.
l. 204-209: In March 2011, India officially announced its tiger population estimates following the 2010 survey (http://pib.nic.in/newsite/PrintRelease.aspx?relid=71310). These results were then downloadable at the following link: http://pib.nic.in/archieve/others/2011/mar/d2011032801.pdf. The Science article referred to by the authors (Karanth et al. 2011b; published in May 2011) cites this source. However, after the publication of Karanth et al. (2011b), this link was disabled, and after July 2011 official communications started citing another report (Jhala et al. 2011b) based on the same 2010 field survey. Since neither the earlier result (announced in March 2011) nor the later report (Jhala et al. 2011b) is a peer-reviewed document, it is difficult to ascertain which of the two results is the reliable source. Since the estimates differed greatly between these two results, this led to much confusion in the public domain. Given that Karanth et al. (2011b) was published before the July 2011 report (Jhala et al. 2011b), its authors cannot be held accountable for this uncertainty. In that context, the question about the ‘49% increase in tiger density in 4 years’ should ideally be posed to the authors of the national tiger assessment of the 2010 survey. The authors of this manuscript could bring this point into the discussion, particularly in light of Darimont et al. (2018)’s observation about political populations.
l.223-225: If readers go through the estimates of the national tiger assessments reported in Gopalaswamy et al. (2015a,b), they will find that the estimates are borrowed from Jhala et al. (2011b), pertaining to the national tiger assessment of 2010. So, to say that Gopalaswamy et al. (2015a,b) use estimates from the national tiger assessment of 2014 (Jhala et al. 2015) is factually wrong.
l. 230-242: Here the authors of the manuscript refer to grey literature, such as popular articles and statements to the media, to claim that the tiger survey methodologies were drastically changed between the two efforts in 2010 and 2014. This claim was accepted by Harihar et al. (2018) to advance the argument that no valid claims about tiger population changes could be made by comparing figures from the two surveys. Consequently, this argument actually further validates the remarks made in the press and in press releases.
l. 243-268: Once again, press statements and opinions are cited, which somewhat lose relevance in a scientific manuscript. But given all the arguments discussed in this manuscript, even these press statements now seem strengthened.
l. 269-273: These statements show that the authors have not read the Corrigendum (Gopalaswamy et al. 2015b), which corrects the algebraic error in Gopalaswamy et al. (2015a) that was suspected in the blog. This correction, however, does not change the fundamental inferences about the invalidity of tiger index-calibration experiments, which were later confirmed by Jhala et al. (2015).
RECOMMENDATIONS
1. Most of the arguments presented in the manuscript strengthen rather than weaken the scientific findings of Gopalaswamy et al. (2015a,b). However, the polemical style of writing may make statistically unsophisticated readers believe that this manuscript is a strong exposé of flaws in Gopalaswamy et al. (2015a,b). Hence, I would suggest a complete rewrite of this manuscript, removing all the polemics and presenting purely scientific, fully developed arguments.
2. I would recommend that the authors clearly define the key scientific question they wish to answer, use appropriate methods to answer it and gather sufficient data to strengthen their answer. Through this process of answering clear scientific questions they can, of course, criticize or support any previous scientific findings and arguments. The authors’ current stated aim of proving some sort of scientific fraud merely with the use of rhetoric, backed only by a morass of contradictory scientific arguments, does not merit publication in a scientific journal.
3. Because the manuscript is a rebuttal rather than an original article, instead of being revised it could also be sent for publication as a formal rebuttal to the same journal in which Gopalaswamy et al. (2015a,b) reported their original analyses (i.e. Methods in Ecology and Evolution). Alternatively, PeerJ could publish this rebuttal as is and invite the original authors of Gopalaswamy et al. (2015a,b) to write a formal Response. This option would enable the authors of the original article to respond to the various criticisms, including the claims of possible scientific fraud leveled in this manuscript, in sufficient and necessary detail, so that the scientific and conservation communities are kept informed about the debate on a major theme: the validity of index-calibration in the context in which it is applied to monitor India’s tigers.
REFERENCES:
Gopalaswamy, A. M., Delampady, M., Karanth, K. U., Kumar, N. S. and Macdonald, D. W., An examination of index-calibration experiments: counting tigers at macroecological scales. Meth. Ecol. & Evol., 2015a, 6, 1055-1066, doi:10.1111/2041-210X.12351.
Gopalaswamy, A. M., Delampady, M., Karanth, K. U., Kumar, N. S. and Macdonald, D. W., Corrigendum. Meth. Ecol. & Evol., 2015b, 6, 1067-1068, doi:10.1111/2041-210X.12400.
Martinson, B. C., Anderson, M. S. and de Vries, R., Scientists behaving badly. Nature, 2005, 435, 737-738.
Darimont, C. T. et al., Political populations of large carnivores. Conservation Biology, 2018 (in press).
Yumnam, B., Jhala, Y. V., Qureshi, Q., Maldonado, J. E., Gopal, R., Saini, S., … Fleischer, R. C., Prioritizing tiger conservation through landscape genetics and habitat linkages. PLoS ONE, 2014, doi:10.1371/journal.pone.0111207.
Tiger Task Force (2005). Joining the Dots. Project Tiger, Union Ministry of Environment and Forests, Government of India.
Jhala, Y., Qureshi, Q. and Gopal, R. (Eds), Status of tigers, copredators and prey in India 2006. National Tiger Conservation Authority, New Delhi and Wildlife Institute of India, Dehradun, 2008, TR 08/001, pp. 164.
Jhala, Y., Qureshi, Q., and Gopal, R., Can the abundance of tigers be assessed from their signs? J. App. Ecol., 2011a, 48, 1, 14-24.
Jhala, Y., Qureshi, Q. and Gopal, R. (Eds), Status of tigers, copredators and prey in India 2010. National Tiger Conservation Authority, New Delhi and Wildlife Institute of India, Dehradun, 2011b, TR 2011/003, pp. 302.
Jhala, Y., Qureshi, Q. and Gopal, R. (Eds), Status of tigers, copredators and prey in India 2014. National Tiger Conservation Authority, New Delhi and Wildlife Institute of India, Dehradun, 2015, TR 2015/021, pp. 456.
Vishnoi, A., Government seeks withdrawal of research paper questioning tiger population, Economic Times, 2015, http://economictimes.indiatimes.com/news/economy/policy/government-seeks-withdrawal-of-research-paper-questioning-tiger-population/articleshow/47021123.cms
Kempf, E., Far from recovering, tigers may be in worst decline in a century. New Scientist, 2016, www.newscientist.com/article/2090507-far-from-recovering-tigers-may-be-in-worst-decline-in-a-century/. Viewed on 1 September 2016.
Eberhardt, L. L. and Simmons, M. A., Calibrating population indices by double sampling. J. Wildl. Manag., 1987, 51, 665-675.
Pollock, K. H., Nichols, J. D., Simons, T. R., Farnsworth, G. L., Bailey, L. L. and Sauer, J. R., Large scale wildlife monitoring studies: statistical methods for design and analysis. Environmetrics, 2002, 13, 2, 105-119.
MacKenzie, D. I., Nichols, J. D., Lachman, G. B., Droege, S., Royle, J. A. and Langtimm, C. A., Estimating site occupancy rates when detection probabilities are less than one. Ecology, 2002, 83, 8, 2248-2255.
Hines, J. E., Nichols, J. D., Royle, J. A., MacKenzie, D. I., Gopalaswamy, A. M., Kumar, N. S., Karanth, K. U., Tigers on Trails: occupancy modeling for cluster sampling. Ecological Applications, 2010, 20, 5, 1456-1466.
Harihar, A. and Pandav, B., Influence of connectivity, wild prey, and disturbance on occupancy of tigers in the human-dominated western Terai Arc landscape. PLoS ONE, 2012, 7, 7, e40105, doi:10.1371/journal.pone.0040105.
Barber-Meyer, S. M., Jnawali, S. R., Karki, J. B., Khanal, P., Lohani, S., Long, B., et al., Influence of prey depletion and human disturbance on tiger occupancy in Nepal. Journal of Zoology, 2013, 289, 1, 10-18.
Karanth, K. U., Nichols, J. D., Kumar, N. S. and Link, W., Tigers and their prey: predicting carnivore densities from prey abundance. Proc. Natl. Acad. Sci., 2004, 101, 4854-4858.
Karanth, K. U., Nichols, J. D., Hines, J. E. and Kumar, N. S., Assessing tiger population dynamics using photographic capture-recapture sampling. Ecology, 2006, 87, 2925-2937.
Karanth, K. U., Gopalaswamy, A. M., Kumar, N. S., Vaidyanathan, S., Nichols, J. D. and MacKenzie, D. I., Monitoring carnivore populations at the landscape scale: occupancy modeling of tigers from sign surveys. J. App. Ecol., 2011a, 48, 1048-1056.
Karanth, K. U., Gopalaswamy, A. M., Kumar, N. S., Delampady, M., Nichols, J. D., Seidensticker, J., Noon, B. R. and Pimm, S. L., Counting India’s wild tigers reliably. Science, 2011b, 332, 791.
Harihar, A. et al., Defensible inference: questioning global trends in tiger populations. Conservation Letters, 2018, 10, 502-505.