All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear Authors,
Your paper has been accepted for publication in PeerJ Computer Science. Thank you for your fine contribution.
[# PeerJ Staff Note - this decision was reviewed and approved by Sedat Akleylek, a PeerJ Section Editor covering this Section #]
I am satisfied with the revised version of this work.
I am satisfied with the revised version of this work.
I am satisfied with the revised version of this work.
I am satisfied with the revised version of this work.
The revision addresses my comments. I accept the paper.
The revision addresses my comments. I accept the paper.
The revision addresses my comments. I accept the paper.
Dear Authors,
Your paper has been revised. It needs minor revisions before being considered for publication in PeerJ Computer Science.
More precisely:
1) You must indicate the limitations of the proposed model.
2) You must include additional numerical results demonstrating the effectiveness of the proposed model.
The revision addresses my previous comments.
I have further concerns about the paper, as below:
- Did they run processing-time experiments? If so, what were the results?
- Could they show cases where their model works well and cases where it does not, and explain them in detail?
- It would be better to conduct the ablation study with more model combinations, not only with each individual component.
- What are the limitations of their proposed model, and why?
Dear Authors,
Your paper has been revised. Based on reviewers' concerns, it needs major revisions before being considered for publication. More precisely, the paper has not been updated based on the comments provided by reviewers.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The revision is better.
For my comments 1, 2, 3, 6, 7, and 8, they should include their replies in the paper.
In the revised paper, they said they added comparisons with two relevant research studies from the 2023 CVPR conference, namely AT3D and SiblingAttack; they should add more comparison models from recent years.
Dear Authors,
Your paper has been revised. Given the reviewers' considerations, it needs major revisions before being considered for publication in PeerJ Computer Science.
More precisely, the following points must be addressed and clarified:
1) You must clearly describe how you selected the Generative Adversarial Network hyperparameters;
2) You must strengthen the conclusions section of your paper, because it is vague and fails to provide specific future directions that would guide subsequent research in the field. Furthermore, the comparisons conducted in this study must be enhanced, because they do not provide a comprehensive evaluation of the performance metrics that would justify the study's claims.
[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should *only* be included if the authors are in agreement that they are relevant and useful #]
The proposed meta-analysis does not introduce new insights beyond what is already available in the existing literature, making the contribution appear redundant
The paper fails to adequately address the limitations of the existing methods
The presentation style is repetitive, and the paper does not maintain a logical flow, making it difficult for readers to follow the arguments being made
The problem statement is not clearly articulated, making it difficult to understand the research gap that the paper intends to address
The literature review does not sufficiently establish the novelty of the proposed approach compared to existing works, weakening the justification for this study. Refer to more recent works such as doi.org/10.1007/s11042-023-16736-5, https://doi.org/10.1016/j.procs.2024.04.090, DOI: 10.1016/j.heliyon.2024.e37163
**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful.
The figures and tables are poorly integrated, with minimal interpretation provided to explain their relevance to the paper's findings
The research questions posed in the introduction are not directly addressed in the analysis, leading to a lack of cohesion between the aims of the paper and its content
The methodology lacks sufficient detail on the inclusion and exclusion criteria, which raises concerns about the validity of the analysis
The conclusions are vague and fail to provide specific future directions that would guide subsequent research in the field
The paper makes several unsupported claims, particularly regarding the performance of the approach, without presenting rigorous experimental or statistical analysis
The comparisons are superficial and do not provide a comprehensive evaluation of the performance metrics that would justify the claims of improvement
This paper presents an adversarial face attack method, AdvFaceGAN, based on a Generative Adversarial Network (WGAN-GP).
The authors used three losses: a generator loss, a perturbation loss based on the L2 norm, and a visual loss based on structural similarity. They conducted experiments and analyzed the results.
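To make the three-loss design concrete, the sketch below shows how such a combined objective is typically assembled as a weighted sum. This is an illustrative reconstruction, not the authors' implementation: the `lambda_pert` and `lambda_ssim` weights, the mean-L2 formulation, and the function names are all assumptions, and the SSIM term is taken as a precomputed value rather than derived here.

```python
import math

def l2_perturbation_loss(original, adversarial):
    """L2 norm of the pixel-wise perturbation between the clean face
    and the adversarial face (hypothetical formulation)."""
    return math.sqrt(sum((a - o) ** 2 for o, a in zip(original, adversarial)))

def total_generator_loss(adv_loss, pert_loss, ssim_loss,
                         lambda_pert=10.0, lambda_ssim=1.0):
    """Weighted sum of the three losses described in the review:
    the WGAN-GP generator (attack) loss, the L2 perturbation loss,
    and the SSIM-based visual loss. Lambda weights are illustrative."""
    return adv_loss + lambda_pert * pert_loss + lambda_ssim * ssim_loss

# Example: a perturbation of (3, 4, 0) has L2 norm 5.
pert = l2_perturbation_loss([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])
total = total_generator_loss(adv_loss=1.0, pert_loss=pert, ssim_loss=0.2)
```

Weighting the perturbation and visual terms against the attack loss is what lets such a generator trade off attack success against imperceptibility, which is also why reviewers ask how each loss is evaluated independently and how the weights were chosen.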
There are some points the authors should address, as follows:
1- Why did they choose WGAN-GP framework for this problem?
2- What is their improvement of the WGAN-GP?
3- Why did they use three losses: a generator loss, a perturbation loss based on the L2 norm, and a visual loss based on structural similarity?
4- How to evaluate each loss independently?
5- How to choose all hyperparameters in their experiments?
6- Of the nine compared models, only Adv-Hat (Komkov & Petiushko, 2021) and AT3D (Yang et al., 2023) are cited; how did they obtain the results for all the compared models?
7- Why did their proposed model achieve the best results? Could they explain this in detail?
8- Are there any cases where their proposed model does not work well? If so, could they explain them in detail?
9- The experiments and discussion are not sufficiently strong.
10- They should conduct more experiments, analyze the results in detail, and compare with more advanced methods from recent years.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.