All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear Author,
Your paper has been accepted for publication in PeerJ Computer Science. Thank you for your fine contribution.
[# PeerJ Staff Note - this decision was reviewed and approved by Shawn Gomez, a PeerJ Section Editor covering this Section #]
Dear Authors,
Your paper has been reviewed. It needs major revisions before it can be accepted for publication in PeerJ Computer Science. More precisely:
1) You must justify the chosen hyperparameters (e.g., learning rate, batch size, optimizer selection) and explain how they impact model performance.
2) To enhance the generalizability of the proposed approach, you must discuss how the model might perform on real-world agricultural images with variations in lighting, background noise, and occlusions.
3) To highlight deep learning's advantages, you must include a brief comparison with traditional machine learning techniques (e.g., SVM, Random Forest).
4) To support reproducibility, you must provide the source code and trained weights. A simple diagram of the training pipeline would be helpful.
5) Some sections contain overly complex phrasing, which could be simplified for better readability. Consider revising sentences for clarity, particularly in the introduction and methodology sections. Furthermore, some figures and tables lack detailed descriptions. Consider explaining what each figure represents, particularly in performance comparison charts and segmentation visualizations.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
The manuscript is written in clear, professional English, but there are instances where sentence structures could be simplified for better readability. Some sections contain overly complex phrasing, which might make comprehension difficult for a broader audience.
The introduction provides an adequate background and motivation for the study, highlighting the significance of deep learning for plant disease classification.
The literature review is well-referenced and relevant, covering prior works and existing methodologies. However, some citations could benefit from a clearer discussion of their limitations and how the current study overcomes them.
The manuscript follows PeerJ's structural guidelines, with a logical flow from introduction to methodology, results, and discussion.
The study falls within the scope of the journal and meets high technical and ethical standards.
The methods are described in sufficient detail to allow for reproducibility, including details on dataset selection, preprocessing, model architectures, and evaluation metrics.
The manuscript discusses data preprocessing, segmentation strategies, and classification models comprehensively. However, the rationale behind selecting specific hyperparameters (such as learning rate, batch size, and optimizer choices) could be better justified.
The evaluation metrics (accuracy, precision, recall, IoU) are appropriate, and results are well presented with tables and figures.
The study successfully demonstrates the improved accuracy of the proposed approach compared to baseline models.
The results are clearly presented, and comparisons between different architectures are well-documented in tables and performance plots.
The study identifies potential limitations, such as the reliance on the PlantVillage dataset, which may not generalize well to real-world agricultural conditions.
Future work is suggested, including the extension to larger datasets with increased class diversity and improvements in computational efficiency.
Some sections contain overly complex phrasing, which could be simplified for better readability. Consider revising sentences for clarity, particularly in the introduction and methodology sections.
The manuscript does not provide sufficient justification for the chosen hyperparameters (e.g., learning rate, batch size, optimizer selection). Please explain why these specific values were selected and how they impact model performance.
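One way to make that justification concrete is a small validation sweep over the candidate settings. The sketch below is illustrative only: the tiny model, synthetic data, and grid values are hypothetical stand-ins for the authors' actual architecture, dataset, and search space.

```python
# Illustrative hyperparameter sweep (hypothetical grid; not the authors' search).
import itertools

import torch
import torch.nn as nn

def make_model():
    # Stand-in classifier; a real sweep would rebuild the paper's architecture.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

X = torch.randn(256, 3, 32, 32)   # synthetic stand-in images
y = torch.randint(0, 10, (256,))  # synthetic stand-in labels

grid = {"lr": [1e-2, 1e-3, 1e-4], "batch_size": [16, 32], "opt": ["adam", "sgd"]}

results = []
for lr, bs, opt_name in itertools.product(*grid.values()):
    model = make_model()
    opt_cls = torch.optim.Adam if opt_name == "adam" else torch.optim.SGD
    opt = opt_cls(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for i in range(0, len(X), bs):  # one epoch is enough for illustration
        opt.zero_grad()
        loss_fn(model(X[i:i + bs]), y[i:i + bs]).backward()
        opt.step()
    with torch.no_grad():
        # A real sweep would score a held-out validation split, not the
        # training data used here for brevity.
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    results.append(((lr, bs, opt_name), acc))

# Reporting the full (setting, accuracy) table would justify the final choice.
print(max(results, key=lambda r: r[1]))
```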
The study relies on the PlantVillage dataset, which consists of controlled images. To enhance generalizability, consider discussing how the model might perform on real-world agricultural images with variations in lighting, background noise, and occlusions.
While the paper compares deep learning models, it would be beneficial to include a brief comparison with traditional machine learning techniques (e.g., SVM, Random Forest) to highlight the advantage of deep learning.
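Such a baseline comparison is inexpensive to add. A minimal scikit-learn sketch follows, assuming flattened pixel values as a stand-in for whatever features the authors would actually extract; the data here is synthetic and purely illustrative.

```python
# Minimal classical-ML baseline sketch (synthetic data; illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 64 * 64))   # stand-in for flattened leaf images
y = rng.integers(0, 5, 500)      # stand-in disease labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

for name, clf in [
    ("SVM", SVC(kernel="rbf", C=1.0)),
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```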
The proposed model achieves high accuracy but lacks discussion on computational efficiency. Could you provide insights on training time, model complexity, or memory requirements to assess practical feasibility?
Some figures and tables lack detailed descriptions. Consider adding explanations for what each figure represents, particularly in performance comparison charts and segmentation visualizations.
The paper does not mention whether data augmentation techniques (e.g., rotation, flipping, noise addition) were applied. If used, please provide details on their impact on model performance. If not used, consider justifying this decision.
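For reference, the augmentations named above typically take only a few lines in common frameworks. A minimal torchvision sketch follows; since torchvision's classic transforms API has no built-in additive-noise transform, a small helper is written by hand. This is an assumption about tooling, not a description of the authors' pipeline.

```python
# Illustrative augmentation pipeline (not necessarily what the authors used).
import torch
from torchvision import transforms

def add_gaussian_noise(img, std=0.05):
    # Hand-written helper: adds Gaussian noise and clamps back to [0, 1].
    return (img + torch.randn_like(img) * std).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=30),
    transforms.ToTensor(),                  # PIL image -> [0, 1] tensor
    transforms.Lambda(add_gaussian_noise),
])
# Passing `transform=augment` to the training Dataset applies these on the fly;
# reporting with/without-augmentation accuracy would quantify their impact.
```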
While the paper highlights the success of the model, a discussion of its limitations and possible failure cases would be beneficial. For example, does the model struggle with certain disease types or plant species?
The manuscript presents accuracy and IoU metrics, but measures of statistical robustness (e.g., confidence intervals, standard deviations across repeated runs) are missing. Consider including these to validate the robustness of your findings.
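Reporting such measures can be as simple as repeating training over several random seeds and summarizing the scores. A minimal sketch, assuming five runs with placeholder accuracy values:

```python
# Illustrative robustness summary over repeated runs (values are placeholders).
import numpy as np
from scipy import stats

accs = np.array([0.941, 0.948, 0.936, 0.952, 0.944])  # e.g., 5 random seeds

mean = accs.mean()
sd = accs.std(ddof=1)
# 95% confidence interval for the mean accuracy, using the t distribution.
ci = stats.t.interval(0.95, df=len(accs) - 1, loc=mean, scale=stats.sem(accs))

print(f"accuracy: {mean:.3f} +/- {sd:.3f} (95% CI [{ci[0]:.3f}, {ci[1]:.3f}])")
```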
The results are well supported by appropriate experimental illustrations across various diseases.
1. The proposed model is well conceived and appears to be both valid and generalizable.
2. The organization of the paper and its illustrations is satisfactory.
3. The fundamentals of the paper are well presented, but the reference citations are unevenly distributed, appearing in clusters; they should be spread more evenly throughout the text, and additional citations are needed.
4. The motivation for the research is strong enough to warrant acceptance.
5. More detail could be provided on the study's illustrations and dataset.
6. Summary results could be presented in the paper for easier understanding.
7. The results, illustrated in tables and graphs, are good.
8. The references presented are adequate.
1. Provide the source code and trained weights to support reproducibility. A simple diagram of the training pipeline would be helpful.
2. The test set includes only 20 images, which is too small to draw meaningful conclusions. Consider using a larger or more balanced test set.
3. Include an ablation study to isolate the effect of key components like ASPP, segmentation, and hybrid model fusion.
4. The model descriptions are overly long and include redundant equations. Simplify and focus on the most relevant architectural details.
5. Important baselines such as MobileNet and Vision Transformers are missing. Also, report inference time and parameter count to assess deployment feasibility; a sketch of how these could be measured follows this list.
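On the last point, both quantities are cheap to obtain. A minimal sketch follows, using torchvision's ResNet-18 purely as a stand-in for the paper's model and single-image CPU timing as a simple illustration:

```python
# Illustrative measurement of parameter count and inference time
# (ResNet-18 is a stand-in; the authors would use their own model).
import time

import torch
from torchvision import models

model = models.resnet18(weights=None).eval()
n_params = sum(p.numel() for p in model.parameters())

x = torch.randn(1, 3, 224, 224)  # one dummy input image
with torch.no_grad():
    for _ in range(5):           # warm-up iterations before timing
        model(x)
    t0 = time.perf_counter()
    for _ in range(50):
        model(x)
    per_image_ms = (time.perf_counter() - t0) / 50 * 1000

print(f"parameters: {n_params / 1e6:.1f} M, "
      f"inference: {per_image_ms:.1f} ms/image (CPU)")
```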
1. Revise the abstract by clearly highlighting the novelty and main outcomes of the research work.
2. Revise the introduction and include key outcomes of the study at the end of the section.
3. Revise the Materials and Methods section. Incorporate relevant recent studies on leaf disease classification using deep learning.
4. The dataset description is poorly written; please revise and elaborate it clearly.
5. Provide a detailed explanation of the proposed Iterative UNet architecture. Include appropriate diagrams to better illustrate the methodology.
6. The hyperparameter tuning method is not clearly explained. Expand it with detailed information and cite references where similar methods have been used.
7. Use appropriate references for all performance metrics used.
8. Revise the Results section. Compare the findings of this article with those of existing studies and present the comparison in a summary table.
9. Revise the Conclusion. Clearly explain the future directions and potential extensions of this research.
10. Numerous grammatical errors are present; revise the entire manuscript carefully.
11. A thorough proofreading is strongly recommended.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.