All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear authors, we are pleased to confirm that you have addressed the reviewers' valuable feedback and improved your research accordingly.
Thank you for considering PeerJ Computer Science and submitting your work.
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
good
good
good
The authors do not appear to have responded to the previous comments. This time, a reference to the dataset was given in the literature review section. No comparisons are made. The organizational structure remains the same as in the first submission.
No comparisons are made.
Major concerns
The paper presents a comparative study aimed at evaluating the efficacy of two deep learning models, Faster R-CNN and YOLOv8, in maritime surveillance. The significance of the research, well-backed by a robust literature review, sets a solid groundwork for understanding the advancements in real-time object detection technologies in challenging marine environments.
The paper describes a comparative analysis between two deep learning models—Faster R-CNN and YOLOv8—for detecting fishing vessels and fish in maritime surveillance scenarios. The study evaluates these models based on metrics like accuracy, execution speed, and robustness to environmental conditions.
The study's findings are supported by a methodical approach and rigorous metrics.
Dear authors,
You are advised to respond critically to all comments, point by point, when preparing the new version of the manuscript and the rebuttal letter.
Reviewer #2 states that the revision does not address the concerns raised in the first round of review. Please address all the comments/suggestions provided by the reviewers.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the response letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the response letter. Directions on how to prepare a response letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/.
Kind regards,
PCoelho
No Comment
No Comment
No Comment
No Comment
The revision does not address the concerns raised in the first round of review. The dataset has been shared with the reviewer via a private Google Drive repository; it is recommended that the dataset and its sources be shared via a permanent repository. Also, the manuscript does not contain any link to, or revised description of, the dataset. The suggested restructuring of the relevant section has not been carried out. There are a number of publicly available datasets, such as DeepFish and OzFish, on which state-of-the-art methods have been evaluated; the paper should make comparisons on those datasets and with those methods. Many of the figures, including the confusion matrices, are unreadable. The contributions are limited and not clearly stated.
Many of the important works are missing. The dataset description is missing.
The figures are not clear. Also, comparisons are not made to state-of-the-art methods.
Dear authors,
You are advised to respond critically to all comments, point by point, when preparing the new version of the manuscript and the rebuttal letter. Please address all the comments/suggestions provided by the reviewers.
Reviewer 1 has suggested that you cite specific references. You are welcome to add it/them if you believe they are relevant. However, you are not required to include these citations, and if you do not include them, this will not influence my decision.
Reviewer 2 has pointed to the need for improvements in the manuscript's structure and investigation design.
Kind regards,
PCoelho
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
The abstract should summarize the research, highlighting the focus on comparing the performance of Faster R-CNN and YOLOv8 in real-time detection of fishing vessels and fishes. Emphasize the significance of the study, its potential impact on fisheries monitoring, and the contribution to the field of object detection using deep learning.
Clearly state the research objectives, specifically comparing the performance of Faster R-CNN and YOLOv8. Highlight the relevance of real-time detection in fisheries management and the challenges addressed by the study.
Conduct a comprehensive literature review to establish the current state-of-the-art in object detection, particularly in the context of fisheries monitoring. Discuss existing methods, challenges, and limitations in real-time detection of fishing vessels and fishes.
The following article can be included in the discussion:
"Statistical analysis of design aspects of various YOLO-based deep learning models for object detection."
Identify gaps in the literature that the current research aims to fill and emphasize the uniqueness of comparing Faster R-CNN and YOLOv8.
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
Explain the selection criteria for the models, the dataset used (including its source and characteristics), and the evaluation metrics employed.
Address the strengths and weaknesses of Faster R-CNN and YOLOv8 in real-time detection, considering factors such as accuracy, processing speed, and scalability.
Present the results of the comparative analysis between Faster R-CNN and YOLOv8 in detecting fishing vessels and fishes in real-time. Provide quantitative data on key performance metrics, including accuracy, precision, recall, and processing speed.
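As a hedged illustration of how processing speed could be reported for the two detectors, the sketch below times a generic detector callable over a list of images and converts the mean latency to frames per second. The function name `measure_fps`, the `run_detector` callable, and the warm-up convention are hypothetical placeholders, not the authors' pipeline.

```python
import time

def measure_fps(run_detector, images, warmup=5):
    """Estimate mean per-image latency and FPS for a detector callable."""
    # Warm-up runs (e.g., model loading, GPU kernel compilation) are excluded from timing.
    for img in images[:warmup]:
        run_detector(img)
    start = time.perf_counter()
    for img in images:
        run_detector(img)
    elapsed = time.perf_counter() - start
    mean_latency = elapsed / len(images)   # seconds per image
    return mean_latency, 1.0 / mean_latency  # latency, FPS
```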
Summarize the key findings and contributions of the research. Emphasize the significance of comparing Faster R-CNN and YOLOv8 in real-time fisheries monitoring. Discuss the practical implications of the results and offer recommendations for future research, such as exploring additional models, improving dataset diversity, or integrating other sensors for enhanced detection.
The manuscript could be improved with language editing (grammar, technical writing, and proofreading). The introduction is very brief; it should contain a general background, followed by a critique of the literature and the contributions made in this manuscript. The datasets, which are of specific interest, are not clearly described: it is not stated whether they were collected by the authors or taken from others, and only a loose reference is given [24], which is a review paper. The datasets are not shared either.
Only mAP is used; other metrics that are widely used in the literature are not employed here. Also, it is not clear whether the two models were trained on the data or merely applied as-is. Why were only two models selected?
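For illustration only, here is a minimal sketch of how precision, recall, and F1 could be reported alongside mAP at a fixed IoU threshold. The box format (x1, y1, x2, y2), the function names, and the greedy matching strategy are assumptions for the sketch, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall_f1(predictions, ground_truths, iou_threshold=0.5):
    """Greedy matching of predicted boxes to ground-truth boxes, one list per image."""
    tp, fp, fn = 0, 0, 0
    for preds, gts in zip(predictions, ground_truths):
        matched = set()
        for pred in preds:
            best_j, best_iou = -1, 0.0
            for j, gt in enumerate(gts):
                if j in matched:
                    continue
                overlap = iou(pred, gt)
                if overlap > best_iou:
                    best_j, best_iou = j, overlap
            if best_iou >= iou_threshold:
                tp += 1            # prediction matches an unmatched ground truth
                matched.add(best_j)
            else:
                fp += 1            # prediction has no sufficiently overlapping ground truth
        fn += len(gts) - len(matched)  # ground truths left unmatched
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return precision, recall, f1
```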
The discussion is missing. Why one method works better than the other should have been explained.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.