Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on November 3rd, 2021 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 6th, 2021.
  • The first revision was submitted on January 11th, 2022 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on January 21st, 2022.

Version 0.2 (accepted)

· Jan 21, 2022 · Academic Editor

Accept

The paper can be accepted. Congratulations.

Reviewer 2 ·

Basic reporting

I believe the authors have improved the paper based on the previous reviewer comments.

Experimental design

The authors have justified the use of VGG16. They also added traditional classifiers and another deep learning classifier, Inception v3, for comparison. Cross-validation is also evident in this revised paper.

The authors' justification for not using data augmentation is acceptable.

Validity of the findings

The findings have been properly validated and benchmarked. The revised paper looks better than the original.

Additional comments

The previous issues raised have been addressed by the authors.

Version 0.1 (original submission)

· Dec 6, 2021 · Academic Editor

Major Revisions

We have obtained mixed review reports for the paper. It seems reasonable to offer a chance of revision to address the comments. Please provide a detailed response letter. Thanks.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The presentation quality of this paper is good. The background, the task, and the model are introduced clearly. However, the technical contribution of this paper is limited. It seems that this paper simply applies the VGG-19 model to the Zophobas morio and Tenebrio molitor classification problem.

Experimental design

1. It seems that the authors do not run baselines on the dataset or provide comparisons. Specifically, what is the performance of non-transfer-learning models on the dataset? What is the advantage of the VGG-19 model over other transfer learning approaches on the given dataset?

2. Since the dataset is very small, the authors should use cross-validation to reduce the impact of sampling.
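The k-fold cross-validation the reviewer suggests can be sketched in a few lines. This is a minimal, self-contained illustration of the idea, not the paper's implementation; the helper name `kfold_indices` is hypothetical.

```python
# Minimal sketch of k-fold cross-validation index generation, useful when a
# dataset is too small for a single train/test split to be reliable.
# `kfold_indices` is an illustrative helper, not code from the paper.

def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k folds over n_samples items."""
    # Distribute samples as evenly as possible across folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, test_idx

# Example: 10 images split into 5 folds; every image is tested exactly once.
splits = list(kfold_indices(10, 5))
```

In practice one would train the model on `train_idx` and evaluate on `test_idx` for each fold, then average the metrics; libraries such as scikit-learn provide `KFold` and `StratifiedKFold` for the same purpose.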

Validity of the findings

The novelty of this paper is not established. The dataset of this paper has not been provided.

Reviewer 2 ·

Basic reporting

This is a good technical paper.
Figures are clear and have at least 300 dpi.

On the related works, the authors should move Table 2 to the Discussion section to benchmark their proposed method against other papers.

Table 2 should instead summarize the work, methods, strengths, and weaknesses of related papers.

Experimental design

The use of VGG-19 should be justified.
The dataset is appropriate. Was no data augmentation performed?
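The augmentation the reviewer asks about typically means generating extra training images by label-preserving transforms such as flips. A minimal sketch, assuming images are nested lists of pixel values; the helper names `horizontal_flip` and `augment` are illustrative, not from the paper.

```python
# Minimal sketch of data augmentation by horizontal flipping, which doubles
# the effective size of a small image dataset without new data collection.
# `horizontal_flip` and `augment` are hypothetical helpers for illustration.

def horizontal_flip(image):
    """Mirror each row of the image left-to-right."""
    return [row[::-1] for row in image]

def augment(images):
    """Return the dataset plus a flipped copy of every image."""
    return images + [horizontal_flip(im) for im in images]

# Example: a single 2x2 "image" becomes a dataset of two.
sample = [[1, 2], [3, 4]]
augmented = augment([sample])
```

Real pipelines would use library transforms (e.g. rotations, crops, brightness shifts) rather than hand-rolled flips, but the principle is the same.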

Validity of the findings

The original Table 2 could be moved here to strengthen validation of the proposed method.

·

Basic reporting

1. The article is written in clear, professional English.
2. The background and context are sufficient for this topic.
3. The article structure is reasonable.
4. Results relevant to the hypothesis are self-contained.
5. Formal results are clear and include detailed proofs.

Experimental design

1. This article is within the aims and scope of the journal.
2. The research questions are well defined and address the specific problem of differentiating these two worms.
3. This investigation was performed to a high technical and ethical standard.
4. The methods are described with sufficient detail and information to replicate.

Validity of the findings

1. This article contributes impact and novelty to worm recognition algorithms.
2. All underlying data have been provided and are robust, statistically sound, and controlled.
3. Conclusions are well stated, linked to original research questions and limited to supporting results.

Additional comments

1. In general, we do not have difficulty distinguishing these two worms in our lab.
2. The app will help beginners to differentiate these two worms.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.