All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
Dear authors, we are pleased to confirm that you have addressed the reviewers' valuable feedback and improved your research accordingly.
Thank you for considering PeerJ Computer Science and submitting your work.
Kind regards,
PCoelho
[# PeerJ Staff Note - this decision was reviewed and approved by Sedat Akleylek, a PeerJ Section Editor covering this Section #]
The authors have addressed all my comments.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** When you prepare your next revision, please either (i) have a colleague who is proficient in English and familiar with the subject matter review your manuscript, or (ii) contact a professional editing service to review your manuscript. PeerJ can provide language editing services - you can contact us at [email protected] for pricing (be sure to provide your manuscript number and title). – PeerJ Staff
Language and Clarity:
The manuscript is generally well-written in professional and accessible English. However, certain sentences (e.g., lines 23, 77, 121, 128) could be revised for improved clarity and fluency. Minor language polishing would enhance readability.
Introduction and Literature Context:
The introduction successfully frames the study within the broader field of IoT security and federated learning. That said, a clearer articulation of the specific knowledge gap being addressed would strengthen the justification of the research.
Figures, Tables, and Supplementary Data:
All figures and tables are relevant, clearly labeled, and integrated effectively into the discussion. The raw data are provided, but supplemental files would benefit from more descriptive metadata to improve accessibility and reuse.
Originality and Relevance:
This is an original research paper that fits well within the scope of the journal, tackling a meaningful problem: securing federated learning in IoT environments against poisoning attacks.
Rigor and Ethical Standards:
The research design is technically rigorous and follows high ethical standards, with appropriate measures for data handling and experimental control.
Methodological Detail:
The methodology, which includes convolutional neural networks and a novel adaptive trust mechanism (AGAT-FL), is described with sufficient detail to enable replication. The adaptive trust scoring mechanism adds novelty and practicality to the system.
Data Integrity and Robustness:
The underlying data are comprehensive and well-documented. The statistical analysis appears sound and appropriately controlled.
Interpretation and Scope of Conclusions:
Conclusions are logically derived from the results and stay within the scope of the findings. The authors correctly avoid overgeneralization.
Model Performance:
The AGAT-FL model demonstrates improved performance compared to baseline approaches, with gains clearly shown via metrics such as accuracy, precision, recall, and F1-score. The approach is validated across multiple datasets.
Security Impact:
The adaptive trust mechanism is shown to be effective in mitigating poisoning attacks, adding to the robustness and real-world relevance of the system.
Statistical Reporting:
Some parts of the data analysis could be improved with more detailed statistical reporting (e.g., confidence intervals, p-values) to enhance interpretability and rigor.
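For illustration of the kind of statistical reporting meant here, a percentile bootstrap confidence interval over repeated runs is one lightweight option. This is a minimal sketch using only the standard library; the accuracy values are hypothetical and not taken from the manuscript.

```python
import random

def bootstrap_ci(values, n_resamples=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        # Resample with replacement and record the resample mean.
        sample = [rng.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-run accuracies; a real analysis would use repeated trials.
accs = [0.941, 0.948, 0.939, 0.945, 0.943]
low, high = bootstrap_ci(accs)
```

Reporting the interval alongside the point estimate (e.g., "accuracy 0.943, 95% CI [low, high]") makes cross-method comparisons far easier to interpret.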
Introduction Refinement:
Expanding the introduction to include a more detailed background and clearer identification of the knowledge gap would help better frame the contribution.
Language Refinement:
Minor revisions in sentence construction and terminology usage throughout the manuscript would improve clarity and impact.
Ethical Compliance:
Ethical and confidentiality standards are fully respected, with no evidence of unauthorized content or data sharing.
1. The English requires minor revision.
2. At the end of the abstract, add the numerical results of AGAT-FL (the obtained accuracy, precision, recall, and F1-score).
3. In the introduction section, at least define poisoning attacks.
4. Add a reference for Figure 2 if it is not your own.
5. The literature review is comprehensive and includes recent works (2023–2025), covering federated learning, poisoning attacks, and IoT intrusion detection. To improve the literature review section, add a comparison table of the previous studies.
6. In the proposed methodology section, add a flowchart showing the stages of the proposed methodology.
7. XAI components (SHAP, LIME) are briefly discussed, but no interpretability results are presented; please include them.
8. Include a sensitivity analysis on the number of poisoned nodes, the τ thresholds, and λ for anomaly detection.
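To make the sensitivity-analysis request in point 8 concrete, one simple design is a grid sweep over τ, λ, and the number of poisoned nodes. The sketch below is purely illustrative: `run_trial` is a hypothetical stand-in (a toy monotone response), where a real study would retrain and evaluate AGAT-FL at each grid point and return a detection metric such as recall.

```python
import itertools

def run_trial(tau, lam, n_poisoned):
    """Stand-in for one evaluation run; a real study would retrain and
    evaluate the model here. The formula below is a toy response only."""
    return max(0.0, 1.0 - 0.02 * n_poisoned
                       - 0.1 * abs(tau - 3.0)
                       - 0.1 * abs(lam - 0.5))

grid = itertools.product([2.0, 3.0, 4.0],    # tau thresholds
                         [0.25, 0.5, 0.75],  # lambda for anomaly detection
                         [1, 5, 10])         # number of poisoned nodes
results = {(tau, lam, k): run_trial(tau, lam, k) for tau, lam, k in grid}
best = max(results, key=results.get)
```

Tabulating `results` (or plotting metric vs. each swept parameter with the others fixed) would show how robust the reported gains are to these choices.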
1. Aim and Objectives – check the spelling: "Aim", not "Aimd".
2. Many sentences are too long and contain multiple ideas jammed together, which makes them hard to parse. Example: "Poisoning attacks often exploit temporal inconsistencies in network traffic, leading to deceptive model updates."
3. Several terms and concepts (e.g., "trust-weighted aggregation", "adversarial updates") are repeated without adding new meaning.
4. There are several formatting issues due to missing whitespace or improper line merging, which affects readability.
5. Align the abstract, objectives, methodology, and conclusion in tone and claims.
6. Avoid repeating the same FL vulnerability explanation across sections unless you’re adding new technical insight.
1. How does AGAT-FL address common challenges in graph learning for FL, such as communication overhead, synchronization, and sparse connectivity (i.e., when malicious nodes attempt to isolate themselves)?
2. Where and how are SHAP/LIME results integrated into the pipeline? Do you provide specific case studies showing how these tools helped interpret misclassifications or verify poisoning detection logic?
3. Contrastive learning requires careful construction of "positive" and "negative" pairs. In your IoT IDS setting, how are these instance pairs defined? E.g., are benign vs. malicious traffic windows contrasted, or is the contrast temporal?
4. What exactly do you mean by "multi-modal feature fusion"? Are you referring to combining temporal, spatial, and trust-related features? If so, what fusion technique is used (concatenation, attention, weighted averaging)?
5. How is meta-learning applied within AGAT-FL? Is it used for hyperparameter tuning, model initialization, or rapid adaptation to adversarial shifts? More precision in explaining this term would strengthen technical clarity.
6. Are the normalization statistics (mean and standard deviation) computed per node (federated) or globally? This can affect privacy, performance, and data leakage risk in FL settings. Clarification helps gauge adherence to FL constraints.
7. Add explanation for attack intensity (e.g., epsilon in FGSM, perturbation budgets) and how model robustness is validated under varying attack levels.
8. Define All Symbols in the Algorithms: Even for a technical audience, explicitly defining terms improves readability.
9. Algorithm 1: Could you elaborate on using FedAvg or a trust-weighted aggregation based on GAT outputs? Also, regarding Client Selection, are clients randomly selected per round or fixed? Is participation synchronous or asynchronous?
10. Show a clearer pseudocode flow for anomaly rejection and the fallback action (e.g., skip aggregation, or replace with the last good update). Also consider presenting the full Mahalanobis distance formula more cleanly.
11. Algorithm 2: Trust-Based Edge Weighting: How is the trust scoring function T(ni) defined and computed? Edge/Adjacency Matrix Construction: Is the adjacency matrix A symmetric (i.e., A(i, j) = A(j, i)) and does this reflect true bi-directional trust, especially in cases where communication may be asymmetric?
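To make points 9–11 concrete, the sketch below shows one possible shape for trust-weighted aggregation with an anomaly gate and a fallback to a client's last accepted update. All names, values, and thresholds here are hypothetical illustrations of what the revised pseudocode could clarify, not the authors' actual algorithm; the Mahalanobis distance is shown with a diagonal covariance for brevity.

```python
import math

def mahalanobis_diag(update, mean, var):
    """Mahalanobis distance assuming a diagonal covariance (one variance
    per coordinate); the full formula uses the inverse covariance matrix."""
    return math.sqrt(sum((u - m) ** 2 / v for u, m, v in zip(update, mean, var)))

def aggregate(updates, trust, last_good, mean, var, tau=3.0):
    """Trust-weighted averaging; anomalous updates are replaced by the
    client's last accepted update (fallback) before aggregation."""
    accepted = []
    for cid, upd in updates.items():
        if mahalanobis_diag(upd, mean, var) > tau:  # anomaly gate
            upd = last_good.get(cid, mean)          # fallback action
        accepted.append((trust[cid], upd))
    total = sum(w for w, _ in accepted)
    dim = len(mean)
    return [sum(w * u[i] for w, u in accepted) / total for i in range(dim)]

# Two benign clients and one poisoned client (hypothetical 2-D updates).
updates = {"a": [1.0, 1.0], "b": [1.2, 0.9], "c": [50.0, -40.0]}
trust = {"a": 0.9, "b": 0.8, "c": 0.3}
last_good = {"c": [1.1, 1.0]}
stats_mean, stats_var = [1.0, 1.0], [0.25, 0.25]
agg = aggregate(updates, trust, last_good, stats_mean, stats_var)
```

Here client "c"'s poisoned update exceeds the τ gate and is replaced by its last accepted update, so the aggregate stays near the benign region. Spelling out exactly this control flow (gate, fallback, weighting) in the revised algorithms would answer questions 9–11.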
1. Trust-aware methods (e.g., AdaptFL or pFedHR) appear closer in spirit to AGAT-FL than older baselines like FedAvg. Could this suggest that some of AGAT-FL’s gains are rooted in principles shared with these methods? If so, what makes AGAT-FL more effective?
2. What is the computational or communication overhead of AGAT-FL relative to these baselines? High accuracy is valuable, but only if the model is still efficient and scalable in IoT contexts.
3. Your results show accuracy improvements above 94% versus baselines below 89%. Can the performance gains be directly attributed to one component (e.g., trust-based aggregation) or a synergy of all three (GAT, CNN-GRU, Mahalanobis filtering)?
4. Consider including ablation study results to quantify each module’s individual and combined impact. Furthermore, given that AGAT-FL integrates three major components (GAT-based aggregation, hybrid deep models, and anomaly filtering), a modular ablation analysis should be conducted to demonstrate the necessity of each part.
Consider adding a Related Work table that maps methods to authors and highlights their contributions or differences from AGAT-FL.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.