Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on May 30th, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 18th, 2025.
  • The first revision was submitted on August 5th, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on September 9th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 3rd, 2025.

Version 0.3 (accepted)

· Oct 3, 2025 · Academic Editor

Accept

Thank you for your valuable contribution.

[# PeerJ Staff Note - this decision was reviewed and approved by Vicente Alarcon-Aquino, a PeerJ Section Editor covering this Section #]

Reviewer 1 ·

Basic reporting

-

Experimental design

-

Validity of the findings

The authors addressed all reviewers' remarks. Now this paper is ready for publication.

Additional comments

The authors addressed all reviewers' remarks. Now this paper is ready for publication.


Version 0.2

· Sep 3, 2025 · Academic Editor

Minor Revisions

There are some minor concerns raised by Reviewer 3. These need to be addressed before acceptance.

Reviewer 1 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

The authors addressed all my remarks. Now this paper is ready for publication.

Additional comments

The authors addressed all my remarks. Now this paper is ready for publication.


Reviewer 3 ·

Basic reporting

The paper is well organized, well written, and easy to follow throughout. The literature references for the given scope are sufficient and well structured. The relevant results and hypotheses are presented in a clear manner too. The authors did a great job of surveying the major existing adversarial attack algorithms, with some experimental results serving as a comparative study. However, there are a few recommendations that will make the work clearer and better directed.
- Attribute Equation 1 to FGSM (the standard form is given after this list for reference)
- The scope of the work looked too broad to me, as you discussed adversarial evasion, poisoning, inference, and extraction attacks as well as LLM vulnerabilities, not all of which were necessary within the scope of this work.
- The work also omits the pipeline for the white-box and black-box implementations and the hyperparameters used in the experiments for the different types of attacks.
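For reference, the standard FGSM perturbation (Goodfellow et al., 2015), which Equation 1 presumably presents, is

    \[ x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\big( \nabla_{x} J(\theta, x, y) \big) \]

where \( J \) is the training loss, \( \theta \) the model parameters, \( (x, y) \) the input and its label, and \( \epsilon \) the perturbation budget.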

Experimental design

The research question definitions and the methodology are complete and surely serve the current knowledge domain. The experiments look complete within the defined scope, though they do not show the exact implementation pipeline of the white-box and black-box attacks.

Validity of the findings

The work is sound in terms of its benefit to the literature, though the extent of its contributions may be limited: the models, datasets, and adversarial techniques used are very common ones that appear in almost every adversarial experiment, and more breadth would surely have added significance to this work. The conclusion is stated well and linked to the original research question.


Version 0.1 (original submission)

· Jul 18, 2025 · Academic Editor

Major Revisions

The authors conducted comparative studies of four popular attack methods (FGSM, PGD, DeepFool, and Carlini & Wagner) on five different neural network models (FCNN, LeNet, Simple CNN, MobileNetV2, and VGG11), using three datasets: MNIST, Fashion-MNIST, and CIFAR-10. The EVAISION tool, developed by the authors, enables a consistent and multi-faceted analysis of the effectiveness of attacks. Please follow the reviewers' requests and suggestions strictly.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1 ·

Basic reporting

The article is written in clear and professional English, and its structure is consistent with the requirements of scientific publications. The introduction places the research in the context of current threats related to the security of AI and Machine Learning systems. The authors cite important literature (Goodfellow, Carlini, Papernot, Szegedy, etc.), although the bibliography could be supplemented with newer studies from the last year.

Figures and tables are clear, well-described, and correctly referenced in the text. Data access is described, and all metrics used in the analysis are precisely defined.

Experimental design

The authors conducted comparative studies for four popular attack methods (FGSM, PGD, DeepFool, Carlini & Wagner) on five different neural network models (FCNN, LeNet, Simple CNN, MobileNetV2, VGG11), using three datasets: MNIST, Fashion-MNIST, and CIFAR-10. The EVAISION tool, developed by the authors, enables a consistent and multi-faceted analysis of the effectiveness of attacks.

The research project was well designed, and the choice of metrics (Accuracy, F1 Score, Precision, Recall, Mean Confidence, Misclassification Rate) was justified. Repeated experiments (five runs) were used, which increases the credibility of the results.
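To make this setup concrete, below is a minimal sketch of such a per-attack evaluation loop, assuming PyTorch, inputs scaled to [0, 1], and hypothetical helper names (fgsm, evaluate_attack); it illustrates the kind of measurement described, not EVAISION's actual implementation.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """One-step FGSM: perturb x along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()

    @torch.no_grad()
    def accuracy(model, x, y):
        return (model(x).argmax(dim=1) == y).float().mean().item()

    def evaluate_attack(model, loader, eps=0.1):
        """Compare clean vs. adversarial accuracy over a test loader."""
        model.eval()
        clean = adv = n = 0.0
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            clean += accuracy(model, x, y) * len(y)
            adv += accuracy(model, x_adv, y) * len(y)
            n += len(y)
        return {"clean_acc": clean / n,
                "adv_acc": adv / n,
                "misclassification_rate": 1 - adv / n}

Running such a loop once per attack/model/dataset combination, over repeated runs, yields exactly the kind of metric grid the review describes.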

Validity of the findings

The results are presented reliably and interpreted in a logical way. The authors point to the varied susceptibility of models to attacks - it is particularly interesting to note that simpler models (e.g., LeNet) in some cases show greater resistance than more complex ones (VGG11, MobileNetV2). This is a valuable observation that can help design resilient models in resource-constrained conditions.

The results are consistent with the literature, and the use of many different metrics allows for a more comprehensive understanding of the impact of attacks. EVAISION, as a tool, seems useful and has the potential to be implemented in test environments.

Additional comments

Strengths:
- Well-structured and written text, rich in technical details.
- New tool (EVAISION) enabling reproducible testing of model robustness.
- Clear comparisons and strong justification for model selection and attacks.
- Robust experiments with many metrics, clearly presented results.

Suggestions for improvement:
- Expand the section related to defense against attacks. The work mainly focuses on attacks; it is worth at least briefly comparing the effectiveness of popular defense methods (e.g., adversarial training, defensive distillation; see the adversarial-training sketch after this list).
- Supplement the latest literature from 2023-2025, especially in the context of trends in defense and attacks on LLMs and models sensitive to prompt injection.
- Consider making the EVAISION code available - the work does not provide a repository or information about public access, which limits the replicability of the research.
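On the first suggestion above, a minimal sketch of one adversarial-training update in the same hypothetical PyTorch setting (reusing the fgsm helper from the earlier sketch; an illustration of the defense, not code from the paper):

    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, eps=0.1, adv_weight=0.5):
        """One optimizer step on a mixed clean/adversarial loss."""
        model.train()
        x_adv = fgsm(model, x, y, eps)  # craft perturbations against current weights
        optimizer.zero_grad()           # discard gradients left by fgsm's backward pass
        loss = ((1 - adv_weight) * F.cross_entropy(model(x), y)
                + adv_weight * F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()
        return loss.item()

Weighting the clean and adversarial terms (adv_weight) lets the comparison quantify the usual trade-off between clean accuracy and robustness.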


Reviewer 2 ·

Basic reporting

The writing and presentation in the manuscript are adequate. Literature references are standard. They set the context for the reader. However, there is no particular focus or emphasis on the survey of the state of the art. There are no tables providing a taxonomy of the techniques and tools. Figures are very basic and do not motivate the research topic. Raw data is publicly available. The manuscript is self-contained but lacks novelty. The hypotheses tested are well known. The scope of the review is not clearly demarcated in comparison to existing reviews in the published literature. The performance evaluation metrics are clearly defined, but an algorithmic analysis is not undertaken, and there are no theorems or detailed proofs.

Experimental design

The research scope is aligned with the Literature Review Articles of the journal. The research question is addressed meaningfully. But the knowledge gaps that the research fills are not stated clearly. A performance evaluation of various techniques and deep neural networks is conducted. The research can be replicated. But the underlying methodology is not investigated rigorously. The technical standard of the investigation can be improved significantly by including the mathematical and computational modelling details.

Validity of the findings

The rationale and improvement of the current study over the existing literature are not specified. The benefit of the study is ambiguous. No taxonomies of the evaluated tools, techniques, tactics, and strategies are given. Evaluation is not conducted on real-world datasets. Comparison with the state-of-the-art is not discussed. Robustness of the evaluation is not discussed. Statistical significance of the evaluation is superficial in that several models are matched with several attack techniques. Cross-referencing is missing across sections of the paper. Conclusions are expressed in terms of performance evaluations of deep learning models. But the originality and significance of such conclusions are questionable. For example, recent developments on game-theoretical adversarial learning algorithms are not included.

Additional comments

Please make the review focused on one or more of the applications, systems, algorithms, or theories. Scope the proposed literature review and experimental review in comparison to the existing survey papers. Several performance evaluations of the deep learning models already exist in the literature. Compare your findings with the existing results.


All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.