Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on June 13th, 2022 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on August 10th, 2022.
  • The first revision was submitted on May 31st, 2023 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on July 4th, 2023.

Version 0.2 (accepted)

· Jul 4, 2023 · Academic Editor

Accept

The authors have addressed all of the reviewers' comments. The paper is recommended for acceptance in its current form.

[# PeerJ Staff Note - this decision was reviewed and approved by Daniel S. Katz, a PeerJ Computer Science Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

The paper is in an acceptable format.

Experimental design

The paper is in an acceptable format.

Validity of the findings

The paper is in an acceptable format.

Additional comments

The paper is in an acceptable format.

Reviewer 2 ·

Basic reporting

See below

Experimental design

See below

Validity of the findings

See below

Additional comments

I have finished my review. All of my comments have been answered by the authors, and the latest version has been updated to take my recommendations into account. The submission can be accepted for publication.

Version 0.1 (original submission)

· Aug 10, 2022 · Academic Editor

Major Revisions

Based on the comments of both reviewers, I recommend "Major Revisions" for this paper. The authors should carefully address all of the reviewers' comments, in particular by enhancing the discussion of the experimental setup and results.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The paper proposes a deep learning-based biodiversity analysis method that uses an aerial platform (UAV), instead of stationary platforms, in real time. The authors created a testbed to train and test the proposed method and its onboard object detection, and to evaluate its performance.

Experimental design

Biodiversity analysis is important for mitigating the impact of climate change on ecology and for taking steps to reverse biodiversity loss by analyzing data collected from individual ecosystems in real time. Traditional biodiversity monitoring techniques face challenges, such as relying on stationary platforms for data acquisition and continuous updating. Moreover, current studies rely on censuses of individual populations using the capture, tag-mark, and recapture technique, which is labor-intensive, expensive, and time-consuming. To address these challenges, this study proposes a deep learning-based method applied to high-resolution spatial images or live footage from an aerial platform (UAV) to create a real-time biodiversity map. A testbed was designed to train and test the proposed method and its onboard object detection. The authors provide outdoor and indoor test results to evaluate the proposed solution for real-time biodiversity analysis.

Validity of the findings

The idea of the paper and the proposed solution are convincing. However, the paper still has some flaws, so revisions are required. The suggestions are listed as follows:

  • The authors state that data acquisition and handling are challenging on stationary platforms, and they propose an aerial-platform solution to overcome these challenges. However, the challenges of data analysis on stationary platforms are not clear; these need to be explained in detail.
  • It is mentioned that automating the entire data collection task with aerial platforms will improve the quality of the collected data. This part is not clear: if this refers to collecting high-quality images, such images could also be collected from fixed platforms. Please explain in more detail.
  • State-of-the-art image recognition works are not addressed in the existing-research section. This section should be extended with such works.
  • In this work, the YOLO algorithm is used for biodiversity analysis. There are many deep learning algorithms for real-time object detection, such as R-CNN, R-FCN, etc. It is not clear why the YOLO algorithm was chosen; this should be clarified in more detail (a minimal sketch of the single-stage detection loop in question appears after this list).
  • The specifications of the dataset should be added. Is training done on the same dataset for both outdoor and indoor testing, or is a different dataset used? Clarifying this would prevent confusion.
  • A complexity analysis of the proposed solution should be added.
  • The authors are advised to improve grammar and sentence structure throughout the article.
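For context on the single-stage detection pipeline discussed above, the following is a minimal sketch, in Python with OpenCV's DNN module, of a YOLOv3-style inference loop of the kind the reviewer refers to. The file names, video source, input resolution, and thresholds are illustrative assumptions only; they are not taken from the paper under review.

    import cv2
    import numpy as np

    # Illustrative file names and parameters -- the paper's actual
    # configuration (weights, input size, thresholds) is not given here.
    CFG, WEIGHTS = "yolov3.cfg", "yolov3.weights"
    CONF_THRESH, NMS_THRESH, INPUT_SIZE = 0.5, 0.4, (416, 416)

    net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
    out_names = net.getUnconnectedOutLayersNames()

    cap = cv2.VideoCapture("uav_footage.mp4")  # placeholder video source
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]

        # One forward pass per frame: YOLO predicts all boxes in a single stage.
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, INPUT_SIZE,
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(out_names)

        boxes, scores, class_ids = [], [], []
        for output in outputs:
            for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
                class_scores = det[5:]
                class_id = int(np.argmax(class_scores))
                score = float(class_scores[class_id])
                if score > CONF_THRESH:
                    cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                    boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                                  int(bw), int(bh)])
                    scores.append(score)
                    class_ids.append(class_id)

        # Non-maximum suppression drops overlapping duplicate detections.
        keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESH, NMS_THRESH)
        for i in np.array(keep).flatten():
            x, y, bw, bh = boxes[i]
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)

    cap.release()

A single-stage detector of this kind produces all candidate boxes in one forward pass per frame, which is the usual reason it is preferred over two-stage detectors such as R-CNN variants for latency-sensitive onboard processing.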

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The manuscript cannot be considered for publication at PeerJ Computer Science for several reasons, such as:

The abstract needs to be rewritten.

The literature review of the state of the art is very weak and does not include recent trends in object detection.

The experimental results are not sufficient, especially since the authors mention in the introduction that they will provide an evaluation and comparative analysis of deep learning algorithms for the detection part. Unfortunately, the proposed method is not compared with recently published methods from 2020-2022; only different configurations of YOLOv3 are included in the EXPERIMENTAL TEST RESULTS section.
