All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Congratulations on your valuable contribution!
[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]
The paper has been modified and improved.
No more comments.
No more comments.
No more comments.
All comments are in the last section.
All comments are in the last section.
All comments are in the last section.
Thanks for the revision. The final version of the paper, together with the responses to my comments and the corresponding revisions, is generally satisfactory.
I recommend that the paper undergo a significant revision. There are numerous inconsistencies, and the requests below are mandatory.
**PeerJ Staff Note**: Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**PeerJ Staff Note**: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.
The manuscript requires revisions: it must be improved in terms of clarity, figures, presentation, and completeness of the literature review to meet academic standards.
The manuscript requires major revisions to incorporate stronger statistical validation and to address potential overfitting and generalizability concerns through rigorous cross-validation and broader dataset evaluation.
The results are promising but require revisions to include more rigorous evaluation metrics and a thorough discussion of the study’s limitations.
1. Explain all abbreviations upon first mention.
2. Clearly state in the introduction what the proposed approach achieves that others do not.
3. Compare and contrast HEMF with other fusion models such as CoAtNet, TransUNet, and MedFormer.
4. Retain the thematic organization of the related work section, but expand it with deeper analysis.
5. Provide a complete block diagram or flowchart of the algorithm.
6. Clearly distinguish between training-time and inference-time behavior.
7. Justify the combination of SA, CA, and MHEA with theoretical rationale or prior work.
8. Explain the data flow within the fusion module.
9. Strengthen the theoretical foundation of the attention and fusion strategies.
10. Add bar charts and ROC-AUC curves with confidence intervals.
11. Include comparisons of training time, convergence time, and inference time.
12. Provide sample heatmaps or prediction visualizations to demonstrate attention/fusion effectiveness.
13. Read this article if applicable: https://journal.qubahan.com/index.php/qaj/article/view/686
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
14. Explicitly discuss the weaknesses and limitations of the proposed approach.
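Regarding item 10, one common way to attach confidence intervals to ROC-AUC is percentile bootstrap resampling. The sketch below is illustrative only, assumes nothing about the manuscript's actual data (the labels and scores are synthetic placeholders), and implements AUC via the Mann–Whitney pairwise-comparison formulation in pure Python:

```python
# Illustrative sketch: a 95% bootstrap confidence interval for ROC-AUC.
# The labels/scores below are synthetic placeholders, not the authors' data.
import random

def auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic (half credit for ties)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample cases with replacement, recompute AUC."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        lab = [labels[i] for i in idx]
        if 0 < sum(lab) < n:  # resample must contain both classes
            stats.append(auc(lab, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]
point = auc(labels, scores)
lo, hi = bootstrap_auc_ci(labels, scores)
print(f"AUC = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

In practice the same interval could be drawn as an error bar on the requested bar charts; libraries such as scikit-learn provide `roc_auc_score` for the point estimate.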
1. While the proposed framework demonstrates promising results across multiple datasets, the manuscript must more clearly articulate the novelty of HEMF in comparison to existing multi-attention or hierarchical fusion approaches. The claim of achieving state-of-the-art performance should be substantiated with direct benchmarking against recent peer-reviewed models.
2. The introduction is generally well-structured; however, the articulation of the research gap could be more sharply defined. It is recommended to explicitly state what specific limitations in prior transformer-based medical image classification methods HEMF addresses.
3. The related work section is organized around three perspectives, examining methods in the literature based on convolutional neural networks, machine learning, and transformers. This section requires significant expansion. A comprehensive literature review table comparing key methods, datasets, architectures, and performance metrics should be included. This will contextualize the contribution of HEMF and highlight its relative advantages.
4. The modular breakdown of HEMF is appreciated; however, the manuscript should include architectural diagrams and flowcharts to improve clarity. Additionally, the novelty of each module should be explicitly compared to similar components in existing models.
5. The use of multiple datasets is commendable. However, the manuscript should include a justification for the selection of each dataset and discuss how dataset diversity impacts generalizability.
6. This is a critical omission: a detailed parameter table must be included, specifying all training configurations (e.g., optimizer type, learning rate, batch size, number of epochs, regularization techniques). This is essential for reproducibility.
7. The evaluation section should be expanded to include additional metrics such as Cohen’s Kappa and MCC, which are particularly important in imbalanced classification tasks. These metrics will provide a more robust assessment of model performance.
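To make point 7 concrete: both requested metrics can be computed directly from the binary confusion matrix. The sketch below is a minimal illustration with made-up labels (it does not use the manuscript's results); for multi-class settings the authors would use the generalized forms, e.g. scikit-learn's `cohen_kappa_score` and `matthews_corrcoef`:

```python
# Illustrative sketch: Cohen's kappa and the Matthews correlation coefficient
# (MCC) for a binary classifier on imbalanced data. Labels are made-up examples.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def mcc(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def cohens_kappa(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement (accuracy)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # imbalanced: 3 positives, 7 negatives
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
print(round(mcc(y_true, y_pred), 3))
print(round(cohens_kappa(y_true, y_pred), 3))
```

Unlike raw accuracy, both metrics correct for agreement expected by chance, which is why they are more informative under class imbalance.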
In summary, although the hierarchical enhanced multi-attention feature fusion proposed in this study has a degree of originality, the points listed above must be addressed.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.