Twisted tale of the tiger: the case of inappropriate data and deficient science
- Subject Areas
- Conservation Biology, Ecology, Ethical Issues, Natural Resource Management, Population Biology
- Keywords
- Double sampling, Ethics, Index calibration, Large scale surveys, Tiger status estimation, Use of inappropriate data
- Copyright
- © 2018 Qureshi et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- Qureshi et al. 2018. Twisted tale of the tiger: the case of inappropriate data and deficient science. PeerJ Preprints 6:e27349v1 https://doi.org/10.7287/peerj.preprints.27349v1
Abstract
Publications in reputed peer-reviewed journals are often looked upon as tenets on which future scientific thought is built. Sometimes, however, published information is flawed, and errors in published research should be expeditiously reported, preferably through a peer-review process. We review a recent publication by Gopalaswamy et al. (2015) that challenges the use of double sampling in large-scale wildlife surveys. Double sampling is an established, economical approach for large-scale surveys: it calibrates abundance indices against absolute abundance, thereby elegantly addressing the statistical shortfalls of indices. The empirical data used by Gopalaswamy et al. (2015) to validate their theoretical model relate to tiger signs and tiger abundance, referred to as an Index Calibration experiment (IC-Karanth). To qualify as a calibration experiment for double sampling, these data on tiger abundance and signs should be paired in time and space, but the original data of IC-Karanth show lags of several years. One crucial data point used in the paper does not match the original source. We show that, through the use of inappropriate and incorrect data collected under a faulty experimental design, wrong parameterisation of their theoretical model, and cherry-picked estimates of detection probability from the literature, the inferences of this paper are highly suspect. We highlight how the results of Gopalaswamy et al. (2015) were further distorted in the popular media. If left unaddressed, the Gopalaswamy et al. (2015) paper could have serious implications for the design of large-scale studies by propagating unscientific inferences.
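The calibration step at the heart of double sampling can be illustrated with a minimal sketch. The data below are entirely simulated and the site counts, slope, and noise level are illustrative assumptions, not values from IC-Karanth or any real survey: an index (e.g. a sign encounter rate) is recorded cheaply at many sites, absolute abundance is measured only on a paired subsample, and a regression fitted on the paired subsample converts index-only observations into abundance estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: the index is cheap to record at many sites,
# while absolute abundance is measured only on a small subsample.
n_index_only = 200   # sites with the index alone
n_paired = 30        # subsample with both index and abundance

# Simulated "truth": index is linearly related to abundance plus noise.
true_slope, true_intercept = 2.5, 0.5
abundance_paired = rng.uniform(1, 10, size=n_paired)
index_paired = (true_intercept + true_slope * abundance_paired
                + rng.normal(0, 1, n_paired))

# Calibration step: regress abundance on the index using the paired
# subsample. Crucially, each pair must come from the same site at the
# same time -- the requirement the abstract says IC-Karanth violates.
slope, intercept = np.polyfit(index_paired, abundance_paired, deg=1)

# Prediction step: convert index-only observations into abundance.
index_only = (true_intercept
              + true_slope * rng.uniform(1, 10, n_index_only)
              + rng.normal(0, 1, n_index_only))
abundance_hat = intercept + slope * index_only

print(f"calibration slope: {slope:.3f}")
print(f"mean estimated abundance: {abundance_hat.mean():.2f}")
```

If the calibration pairs were lagged by years, the fitted slope and intercept would describe a relationship that no longer holds at prediction time, which is the core methodological objection raised here.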
Author Comment
This is a submission to PeerJ for review.
Supplemental Information
Original data used by Gopalaswamy et al 2015 for their Figure 5, as published in the cited sources.
Supplement 1
Screenshots of original publications highlighting data discrepancies in Gopalaswamy et al 2015.