All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear authors, we are pleased to confirm that you have addressed the reviewers' valuable feedback and improved your research.
Thank you for considering PeerJ Computer Science and submitting your work.
Kind regards
PCoelho
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
The authors have addressed all reviewer comments and benchmarked the proposed model against current state-of-the-art (SOTA) models. The manuscript quality has been significantly improved.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
Paper format and picture quality need to be improved.
The dataset size (5,857 images) is insufficient for reliable deep learning evaluation. The absence of industry-standard baselines (EfficientDet, YOLOv8, MobileNet variants) and of validation on standard benchmarks (COCO, Open Images) severely limits comparative assessment.
Single-run results without statistical validation undermine the reliability of the findings: no confidence intervals, significance testing, or variance reporting are provided. Cross-domain generalization is untested beyond the limited apple dataset (2,000 images).
#Addressing Novelty Concerns: While the combination of existing techniques may appear incremental, the authors should better articulate the principled rationale behind their architectural choices. The integration of EMA attention with ShuffleNetV2's channel shuffle operations creates unique feature interaction patterns that warrant theoretical analysis. Consider adding ablation studies showing why this specific combination outperforms alternative attention mechanisms and providing theoretical justification for the design decisions.
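For reference, below is a minimal PyTorch sketch of the channel shuffle operation from ShuffleNetV2 that this comment refers to; it illustrates only the shuffle step itself, not the authors' EMA attention or C2f_LEMA modules.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Mix channels across groups (reshape -> transpose -> flatten),
    as done in ShuffleNetV2 blocks."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

# Example: shuffle a 4-channel feature map split into 2 groups.
out = channel_shuffle(torch.randn(1, 4, 8, 8), groups=2)
```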
#Strengthening Experimental Framework: The dataset limitations are concerning for robust evaluation. The authors should either: (1) expand evaluation to include larger, established agricultural datasets like PlantVillage or custom-collected datasets with >20K samples, or (2) clearly acknowledge dataset constraints and provide cross-domain validation on COCO or Open Images to demonstrate generalizability. Include comprehensive baselines covering EfficientDet-D0/D1, YOLOv8n/s, MobileNetV3-based detectors, and recent agricultural detection methods.
#Statistical Rigor: Results presentation needs substantial improvement. Report mean ± standard deviation across multiple runs (minimum 5), include confidence intervals, and perform statistical significance testing. The current single-run evaluation undermines result credibility. Expand ablation studies to systematically evaluate each component's contribution with proper statistical analysis.
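As an illustration of the requested reporting, the sketch below uses hypothetical mAP values from repeated runs to compute mean ± standard deviation, a 95% confidence interval, and a Welch t-test with NumPy and SciPy; the numbers are placeholders, not results from the manuscript.

```python
import numpy as np
from scipy import stats

# Hypothetical mAP values from five independent training runs.
proposed = np.array([0.842, 0.838, 0.845, 0.840, 0.843])
baseline = np.array([0.821, 0.825, 0.819, 0.823, 0.820])

# Mean ± sample standard deviation, as requested in the review.
print(f"proposed: {proposed.mean():.3f} ± {proposed.std(ddof=1):.3f}")
print(f"baseline: {baseline.mean():.3f} ± {baseline.std(ddof=1):.3f}")

# 95% confidence interval for the proposed model's mean mAP.
lo, hi = stats.t.interval(0.95, len(proposed) - 1,
                          loc=proposed.mean(), scale=stats.sem(proposed))
print(f"95% CI: ({lo:.3f}, {hi:.3f})")

# Welch's t-test for the difference between the two models.
t_stat, p_value = stats.ttest_ind(proposed, baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```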
#Efficiency Analysis: The "lightweight" claims require proper validation. Include comprehensive efficiency metrics: FLOPs, inference latency on target hardware (mobile devices, edge devices), memory consumption, and energy usage. The 34.7% parameter reduction with 2% mAP gain needs context - compare efficiency frontiers against other methods and justify whether this trade-off represents meaningful advancement.
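A minimal sketch of how such efficiency metrics could be gathered is shown below, assuming PyTorch, torchvision, and the thop package; a MobileNetV3 backbone is used as a stand-in for the detector under review, and the input size and run counts are illustrative choices.

```python
import time
import torch
from thop import profile                     # pip install thop
from torchvision.models import mobilenet_v3_small

# Stand-in model for illustration; substitute the actual detector.
model = mobilenet_v3_small().eval()
dummy = torch.randn(1, 3, 640, 640)

# Parameter count and multiply-accumulate operations for one forward pass.
macs, params = profile(model, inputs=(dummy,))
print(f"Params: {params / 1e6:.2f} M, MACs: {macs / 1e9:.2f} G")

# Average CPU inference latency over repeated forward passes.
with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(dummy)
    start = time.perf_counter()
    for _ in range(100):
        model(dummy)
latency_ms = (time.perf_counter() - start) / 100 * 1000
print(f"Latency: {latency_ms:.1f} ms per image")
```

Reporting the same measurements on the intended edge or mobile hardware would directly support the "lightweight" claims.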
#Technical Clarity: Restructure the methodology section for clarity. Provide precise mathematical formulations for the EMA-ShuffleNet integration, include detailed architectural diagrams, and specify all hyperparameters. Address the inconsistent notation in equations 1-15 and improve figure quality with higher resolution and clearer annotations.
#Additional Recommendations: Consider adding deployment validation on actual agricultural monitoring systems, include failure case analysis, and discuss practical implementation constraints. The work would benefit from collaboration with agricultural domain experts to validate real-world applicability.
The paper lacks a literature review covering the experimental background of YOLO and its latest versions.
The manuscript ‘PGLD-YOLO: A lightweight algorithm for pomegranate fruit localisation and recognition’ addresses the issues of low detection accuracy, high parameter counts, and high computational complexity in existing object recognition algorithms for pomegranate fruit localisation and recognition. It proposes a lightweight detection algorithm for pomegranate fruit based on an improved version of YOLOv10s. Experiments were conducted on a publicly available pomegranate fruit dataset, and the algorithm was generalised on an apple detection dataset, demonstrating that the model achieves a balance between detection accuracy, localisation precision, and lightweight design. The manuscript has practical significance and theoretical value in terms of its topic selection, is clearly written, and generally meets academic English standards. The research motivation is clearly stated, the methods section is detailed and thorough, and the proposed method is thoroughly validated through extensive experiments, demonstrating solid and comprehensive research work. The experimental process is rigorous, and the data is well-organised. However, there are still some issues that require further improvement:
1. It is recommended that the section ‘Algorithms for Fruit Recognition based on Deep Learning’ in the manuscript be supplemented with references to relevant literature on fruit recognition algorithms published in 2025 to enhance the timeliness and comprehensiveness of the literature review.
2. There are font inconsistencies in the section titled ‘EMA mechanism’ of the manuscript. For example, the font used for ‘xi’ in lines 391-393 does not match the font used in equations (3) and (4); similar issues are present in lines 399-400 and line 403. Please correct these consistently to improve the readability of the manuscript.
3. It is recommended that the content of the ‘Conclusions’ section of the manuscript be streamlined and reorganised to improve its logical rigour and conciseness.
4. It is recommended that this manuscript include a discussion of the limitations of the proposed method and a description of future directions for development.
5. In Figure 2 of the manuscript, the ‘C2f_LEMA’ module is incorrectly labelled as ‘C2f_FEMA’. Please make the necessary correction.
6. Some of the figures in the manuscript have layout and clarity issues. For example, Figures 1, 2, 14, and 15 are not very clear, and Figure 11 does not match the format of the others. Please adjust and revise them to improve readability.
no comment
no comment
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.