Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on March 31st, 2025 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on July 7th, 2025.
  • The first revision was submitted on August 15th, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on September 15th, 2025.

Version 0.2 (accepted)

Academic Editor

Accept

The reviewers appreciated the recent changes to the article, so I recommend it for acceptance.

[# PeerJ Staff Note - this decision was reviewed and approved by Vicente Alarcon-Aquino, a PeerJ Section Editor covering this Section #]

Reviewer 1

Basic reporting

The authors have addressed all the comments adequately.

Experimental design

-

Validity of the findings

-

Reviewer 2

Basic reporting

The authors have now incorporated all the corrections.

Experimental design

-

Validity of the findings

-

Additional comments

-

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Reviewer 1

Basic reporting

• Insufficient methodology details for reproducibility in the experimental design.
a. The manuscript describes the SDMDC and MCIDS-G components but lacks specific implementation details, which could hinder replication or application in future improvements. For example, on page 17, lines 326-373, the CCTT model's training process (e.g., hyperparameters, training duration, and number of epochs) is not provided.

b. In addition, on page 19, lines 384-390, some critical parameters, such as the termination conditions and population size, are omitted.

Suggested comment: The authors should include a section in the methodology chapter that details the model training procedure, the data processing methods and steps, and the computational resources used, and that specifies the hyperparameters (e.g., batch size, learning rate).
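
For illustration, a minimal sketch of the kind of training configuration such a section could state. Every name and value below is a placeholder, not taken from the manuscript:

```python
# Hypothetical training configuration for the CCTT model; all values are
# placeholders illustrating what the methodology section should report.
CCTT_TRAINING_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 128,
    "epochs": 50,
    "early_stopping_patience": 5,
    "train_test_split": "70:30",
    "hardware": "e.g., 1x GPU, with model and memory stated",
    "random_seed": 42,
}
```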

Experimental design

• Limited real-world validation and dataset diversity, which could affect the validity of the findings.
a. On page 15, lines 278-280, the manuscript claims applicability to Azure, AWS, and GCP but lacks real-world simulation or testing on these platforms.

b. On page 23, lines 490-493, the evaluation relies solely on the BoT-IoT and CICIDS datasets, which, while established, may not fully represent the heterogeneity of multi-cloud traffic.

Suggested comment: Clarify how the extended CSE-CIC-IDS2018 dataset was adapted for SDN traffic (page 16, line 314), and incorporate real-world traffic traces from multi-cloud environments or additional datasets, e.g., UNSW-NB15 or NSL-KDD.
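
As an illustration of the incorporation step, a minimal pandas sketch for pooling two IDS datasets under a shared schema. The column names and mappings below are placeholders, since CICIDS and UNSW-NB15 ship with different headers and label encodings:

```python
import pandas as pd

# Hypothetical harmonization of two IDS datasets onto a shared feature
# schema and a binary label before pooling them for training.
SHARED_FEATURES = ["duration", "src_bytes", "dst_bytes"]

def harmonize(df, rename_map, label_col, benign_value):
    out = df.rename(columns=rename_map)[SHARED_FEATURES].copy()
    out["attack"] = (df[label_col] != benign_value).astype(int)  # binary label
    return out

# Toy frames standing in for the real dataset loads.
cicids = pd.DataFrame({"Flow Duration": [1.0], "TotLen Fwd": [10],
                       "TotLen Bwd": [5], "Label": ["BENIGN"]})
unsw = pd.DataFrame({"dur": [2.0], "sbytes": [7], "dbytes": [3], "label": [1]})

pooled = pd.concat([
    harmonize(cicids, {"Flow Duration": "duration", "TotLen Fwd": "src_bytes",
                       "TotLen Bwd": "dst_bytes"}, "Label", "BENIGN"),
    harmonize(unsw, {"dur": "duration", "sbytes": "src_bytes",
                     "dbytes": "dst_bytes"}, "label", 0),
], ignore_index=True)
print(pooled)
```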

Validity of the findings

• Language and clarity issues in basic reporting.
a. The manuscript contains ambiguous phrasing and some grammatical errors, which could affect readability for a worldwide audience. For example:
i. "fundamental scalability capabilities function with flexible functions" on page 15, line 32;
ii. "security warnings from all MCIDS-G with traffic anomalies are integrated" on page 16, lines 308-309.
Suggested comment: Replace vague terms such as "flexible functions" with precise technical descriptions, e.g., "dynamic resource allocation".

• Insufficient comparison affecting the validity of the findings.
a. On page 9, line 118 and page 10, line 127, the authors claim a revolutionary framework, but the literature review on pages 10-13, lines 132-235, does not adequately differentiate the proposed work from prior studies.

b. On page 25, lines 531-539, the comparative analysis includes models such as CNN and AdaBoost but lacks justification for their selection or a discussion of architectural similarities.

Suggested comment: Provide a table comparing the proposed framework's features (e.g., SDN integration, DL models) against previous work; this will serve as a benchmark that justifies the comparator models and supports the discussion of why SDMDC outperforms the other existing models.
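
A possible skeleton for such a table, offered purely as an illustration. The comparator columns are placeholders, the empty cells would need to be filled in by the authors, and the "Proposed" entries reflect only details stated elsewhere in this review history:

| Feature | Proposed (SDMDC + MCIDS-G) | CNN baseline | AdaBoost baseline | Prior SDN-IDS studies |
|---|---|---|---|---|
| SDN control-plane integration | Yes (SDMDC) | | | |
| Cross-cloud threat correlation | Yes (CCTT) | | | |
| Detection model | LSTM + CCTT | CNN | AdaBoost | |
| Adaptive policy tuning | Lemurs Optimizer | | | |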

• Inadequate discussion of the research limitations.
The conclusion on page 26, lines 575-590, briefly mentions future work but does not address the framework's limitations, such as false-positive rates in real-time settings, the computational overhead of deep learning models, and scalability under extreme traffic loads (page 15, lines 379-382).
Suggested comment: The authors should add a Limitations section discussing potential challenges (e.g., LSTM latency, CCTT complexity) and mitigation strategies, and quantifying the computational cost.
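
On quantifying the computational cost: a minimal sketch of the kind of overhead measurement such a section could report, here the mean per-batch inference latency of a generic LSTM detector. The layer sizes and batch shape are placeholders, not the paper's actual model:

```python
import time
import torch
import torch.nn as nn

# Hypothetical overhead measurement for a Limitations section: mean
# per-batch inference latency of an LSTM-based detector.
model = nn.LSTM(input_size=32, hidden_size=64, batch_first=True).eval()
batch = torch.randn(256, 20, 32)   # 256 flows x 20 timesteps x 32 features

with torch.no_grad():
    model(batch)                   # warm-up pass
    t0 = time.perf_counter()
    for _ in range(100):
        model(batch)
    elapsed = time.perf_counter() - t0

print(f"mean inference latency per batch: {1000 * elapsed / 100:.2f} ms")
```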

Reviewer 2

Basic reporting

-

Experimental design

-

Validity of the findings

-

Additional comments

1. Summary of the Paper
The paper presents a novel Software-Defined Networking (SDN)-enabled security framework for multi-cloud environments, integrating deep learning-based intrusion detection systems (IDS) and adaptive security policy management. The framework includes:
(i) SDMDC (Software Defined Multicloud Defense Controller) as the control plane using a Cross-Cloud Threat Transformer (CCTT).
(ii) MCIDS-G (Multicloud IDS Gateway) as the data plane using LSTM models for localized traffic monitoring.
(iii) Lemurs Optimizer for dynamic and resource-aware policy tuning.

The authors evaluate their model using the CICIDS and BoT-IoT datasets and show improved performance over state-of-the-art techniques.
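
To make the two-plane design concrete: a minimal PyTorch sketch of the idea as described in this summary. The class names, layer sizes, and the mean-pooling fusion are assumptions for illustration only; the paper's actual CCTT and MCIDS-G architectures are not specified at this level of detail here.

```python
import torch
import torch.nn as nn

class LocalIDS(nn.Module):
    """MCIDS-G-style local detector: an LSTM over flow-feature sequences
    with a binary (benign/attack) classification head."""
    def __init__(self, n_features=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):            # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden)
        return self.head(h[-1])      # logits: (batch, 2)

class CrossCloudAggregator(nn.Module):
    """CCTT-style aggregator: a transformer encoder over per-cloud
    embeddings so evidence from one cloud can attend to the others."""
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, per_cloud):    # per_cloud: (batch, n_clouds, d_model)
        fused = self.encoder(per_cloud)
        return self.head(fused.mean(dim=1))

# Smoke test with random tensors standing in for real flow features.
flows = torch.randn(8, 20, 32)       # 8 flows, 20 timesteps, 32 features
print(LocalIDS()(flows).shape)       # torch.Size([8, 2])
clouds = torch.randn(8, 3, 64)       # embeddings from 3 clouds
print(CrossCloudAggregator()(clouds).shape)
```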

Some of the points I would like to enquire about are:
1. How is data integrity ensured during inter-cloud communication in the proposed framework?
2. Why was LSTM chosen over GRU or 1D CNN for the MCIDS-G local IDS, especially given real-time demands?
3. What are the hyperparameters used in training both the LSTM and CCTT models?
4. How does the Lemurs Optimizer differ in convergence or policy adaptation from other metaheuristic algorithms like PSO or GA?
5. Were the datasets used (CICIDS, BoT-IoT) sufficient to generalize real-world multi-cloud scenarios?
6. Can the Cross-Cloud Threat Transformer (CCTT) handle zero-day attacks or only known traffic patterns?
7. Are the dataset splits (70:30) appropriate for evaluating both model training and deployment reliability? (See the stratified-split sketch after this list.)
8. Has the real-world implementation (e.g., testbed on actual multi-cloud infra) been considered or simulated?
9. Does the system maintain compliance with data sovereignty laws across jurisdictions (e.g., GDPR)?
10. What is the overhead introduced by the LSTM-based MCIDS-G on live cloud services?
11. How is east-west traffic detection synchronized between MCIDS-Gs if the attack pattern is split?
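
Regarding point 7, a brief illustration of why the split method matters as much as the ratio: a stratified 70:30 split preserves the attack/benign ratio in both partitions, which is important for imbalanced IDS datasets. X and y below are toy stand-ins, not the paper's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins for flow features and binary labels (~10% attacks,
# i.e., imbalanced, as IDS datasets typically are).
X = np.random.randn(1000, 32)
y = np.random.binomial(1, 0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
print(y_tr.mean(), y_te.mean())    # attack ratios match across the split
```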

Some weaknesses:
• The paper is dense with abbreviations (e.g., SDMDC, MCIDS-G, LO, CCTT), which hurts readability and clarity.
• While the attention mechanisms are explained, the implementation details, architecture depth (number of layers), and feature selection are not described clearly.
• F1-score, precision, recall, and accuracy are mentioned, but there is little discussion of class-imbalance handling or error analysis.
• The choice of the Lemurs Optimizer appears arbitrary without comparative evidence of its superiority over common alternatives like PSO or GA.
• It is unclear how this framework interfaces or coexists with AWS GuardDuty, Azure Sentinel, etc.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.