All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
In the opinion of the original reviewers and my own, this revised paper is acceptable for publication.
[# PeerJ Staff Note - this decision was reviewed and approved by Yilun Shang, a PeerJ Section Editor covering this Section #]
All of the questions and remarks have been answered in detail and with clarity by the authors. Their justifications successfully address the issues brought up, and the edits improve the manuscript's quality and clarity. I have no more concerns since I am satisfied with their responses and the changes that have been implemented.
OK
OK
No
No comment
No comment.
No comment.
The authors have addressed all my comments, so I have no further comments.
N/A
N/A
N/A
N/A
In the opinion of the reviewers and my own, this paper requires a major revision.
**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff
The manuscript is generally well-written and uses professional, clear English throughout.
The introduction provides sufficient background and context, outlining the challenges in mobile edge computing (MEC) and motivating the need for the proposed KECO method.
The structure conforms to PeerJ standards.
The study presents an approach that combines federated learning with knowledge distillation in a teacher-student framework.
The methodology is described with sufficient detail, including the architecture of the KECO system, the training process, and the deployment of student models.
The experimental setup is rigorous, with clear descriptions of datasets, evaluation metrics, and baseline methods for comparison.
The findings are statistically sound, with appropriate use of metrics and comparisons to state-of-the-art baselines.
The paper is generally well-organized and accessible to readers with a background in distributed systems and machine learning.
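For reference, the teacher-student training described above typically optimizes a blend of a soft-label loss (matching the teacher's temperature-softened outputs) and a hard-label loss (ground truth). The following is a minimal, generic Hinton-style distillation sketch; the temperature `T` and mixing weight `alpha` are illustrative values, not the manuscript's settings:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax over a list of raw logits
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Generic teacher-student distillation objective (illustrative only).

    Blends KL(teacher_T || student_T), rescaled by T^2, with the standard
    cross-entropy against the ground-truth label.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Soft loss: KL divergence between temperature-softened distributions
    soft = sum(pt * math.log(pt / ps)
               for pt, ps in zip(p_teacher, p_student)) * T * T
    # Hard loss: cross-entropy of the student against the true label
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss([2.0, 0.5, 0.1], [1.5, 0.8, 0.2], label=0)
```

If the student exactly matches the teacher, the soft term vanishes, so the gradient is driven entirely by the hard labels; `alpha` controls that trade-off.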
Suggestions:
1. There are minor typographical errors (e.g., "offloadinng" instead of "offloading") that should be corrected throughout the manuscript.
2. Some figure captions could be more descriptive.
3. A more detailed description of the hyperparameter selection process and the rationale behind certain design choices (e.g., model architectures, aggregation strategies) is needed.
4. Clarify the scalability aspects: how does KECO perform as the number of edge devices increases?
5. The discussion could be expanded to address the generalizability of the approach to other distributed computing environments beyond MEC.
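As context for the scalability question in point 4, federated aggregation is commonly a sample-size-weighted average of client models. The following is a minimal FedAvg-style sketch, a generic illustration rather than KECO's actual aggregation rule:

```python
def fed_avg(client_weights, client_sizes):
    """Sample-size-weighted average of client parameter vectors (FedAvg-style).

    client_weights: one flat list of parameters per client.
    client_sizes:   number of local training samples per client.
    Generic illustration only; not the manuscript's aggregation scheme.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients: one trained on 300 samples, one on 100
global_w = fed_avg([[1.0, 2.0], [5.0, 6.0]], [300, 100])
# → [2.0, 3.0]
```

The server-side cost grows linearly in the number of clients and in model size, which is one concrete axis along which a scalability analysis could be framed.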
Thank you for your manuscript submission to PeerJ. This paper presents KECO, a model for efficient task offloading in mobile edge computing based on federated knowledge distillation. It is readable and contains some useful contributions for this journal. There are a few major aspects of potential improvement (but not limited to these) that demand further updates, as specified below:
1. While the abstract aims to provide a comprehensive overview of the main contribution, it needs to be revised so that a general reader can grasp the main idea of the draft and its main contribution.
2. The authors should include a subsection in the Introduction discussing the main motivation, since it is crucial to highlight why this research was undertaken and what specific gap, problem, or need it addresses. A strong motivation section will help readers better understand the significance and relevance of the study.
3. In the related work section, a comparative analysis table can be included.
4. The authors tried to explain their results, which is good, but I suggest they follow this method: first present the results as a figure or table, then describe the behaviour of the results, then explain why and how this behaviour occurs, and lastly, most importantly, justify the results by comparing them with existing journal articles.
5. Please correct the grammatical errors and maintain a consistent orientation throughout the paper. The manuscript has many grammatical and organizational issues, which suggests it was not carefully proofread before submission. Please arrange the manuscript properly.
6. The training details are unclear, such as how the learning rate is set, the batch size, which optimizer is used, and the proportions of the training, validation, and test sets.
7. It is recommended to use cross-validation during training.
8. What is next? What are the future directions? The authors should include some future work for their model.
NA
NA
NA
N/A
N/A
N/A
This article proposes KECO, a federated knowledge distillation-based method for task offloading in mobile edge computing (MEC), aiming to optimize energy efficiency, reduce computational latency, and address data silos while preserving privacy. While the work demonstrates potential in balancing energy consumption and task reliability, the comments below may help the authors improve the article:
1. The term “data island” is nonstandard and should be replaced with “data silos” to align with established federated learning literature.
2. The teacher-student architecture is ambiguously described, with no details on the teacher model’s structure, training data, or how its “complex parameter angle” differs from the student model.
3. The claimed novelty of combining federated learning with knowledge distillation is insufficiently differentiated from prior FedKD frameworks (e.g., FedGKD, FedX), which also use soft/hard labels and client-server logit aggregation.
4. The authors may consider citing "A hierarchical federated learning model with adaptive model parameter aggregation" to strengthen the discussion on federated learning frameworks. This work provides insights into adaptive aggregation strategies for heterogeneous edge devices, which could contextualize how KECO’s teacher-student parameter updates align with or diverge from hierarchical aggregation approaches, particularly in balancing global model consistency and local adaptability.
5. The additive homomorphic encryption scheme is described in isolation, with no discussion of its integration into the federated aggregation process or computational overhead.
6. The experimental setup omits details on the dataset(s) used for training/evaluation, including task types, data distributions, and heterogeneity levels across clients.
7. Incorporating "A local cost simulation-based algorithm to solve distributed constraint optimization problems" could enhance the methodological rigor of the task offloading optimization formulation. This article offers a systematic approach to modeling distributed constraints and resource allocation trade-offs, which would complement the energy consumption minimization problem described in Section 3 and provide a comparative baseline for evaluating KECO’s efficiency.
8. The term “task offloadinng” appears with inconsistent spelling (e.g., double “n”), requiring editorial correction throughout the manuscript.
9. The claim that KECO reduces task offloading failures lacks analysis of real-world factors like network congestion or hardware faults, relying solely on simulated CPU/memory stress.
10. The manuscript would benefit from referencing "NonPC: Non-parametric clustering algorithm with adaptive noise detecting" in the context of dynamic node failure simulations (Section 6.4). This work introduces adaptive noise detection in distributed systems, which aligns with the authors’ experiments on CPU/memory stress-induced failures and could provide methodological parallels for improving robustness analyses in MEC task offloading.
11. The energy consumption comparison in Section 6.2 does not contextualize results against theoretical lower bounds or state-of-the-art benchmarks.
12. The conclusion overstates applicability to “large-scale task processing” without quantifying scalability limits (e.g., client/node counts, task granularity).
13. The manuscript does not disclose limitations of homomorphic encryption, such as computational latency or compatibility with deep learning operations, which could affect deployment feasibility.
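Regarding points 5 and 13, the additive homomorphic property at issue can be illustrated with a toy Paillier cryptosystem: multiplying ciphertexts yields an encryption of the sum of the plaintexts, which is what lets a server aggregate encrypted client updates. This is a generic sketch with tiny, insecure parameters, not the authors' scheme:

```python
import math
import random

def keygen(p=101, q=103):
    # Toy Paillier keypair (tiny primes: insecure, illustration only)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                                  # standard simple generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                 # r must be a unit mod n
        r = random.randrange(2, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    # L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n
    return (pow(c, lam, n * n) - 1) // n * mu % n

pk, sk = keygen()
# Additive homomorphism: the product of ciphertexts decrypts to the sum
c = encrypt(pk, 7) * encrypt(pk, 35) % (pk[0] ** 2)
assert decrypt(pk, sk, c) == 42
```

The ciphertext-space multiplication is the operation whose cost and compatibility with deep learning aggregation (point 13) would need to be quantified for deployment.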
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.