Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on March 4th, 2025 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on June 17th, 2025.
  • The first revision was submitted on August 22nd, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • A further revision was submitted on October 30th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on December 1st, 2025.

Version 0.3 (accepted)

Academic Editor

Accept

This manuscript has undergone three rounds of revision, and the authors have satisfactorily addressed all reviewer comments. Based on the reviewers' assessments and my own evaluation, I find no statistical or methodological issues that require further modification. The manuscript is suitable for publication.

[# PeerJ Staff Note - this decision was reviewed and approved by Jyotismita Chaki, a PeerJ Section Editor covering this Section #]

Reviewer 3

Basic reporting

The authors have further improved clarity, coherence, and structure throughout the manuscript. Their response demonstrates that all requested refinements, such as ensuring terminological consistency, improving figure explanations, and enhancing readability, have been implemented.

The narrative is now polished, and the article maintains a high standard of academic writing.

Figures, tables, and raw data are presented in a professional manner and adequately support the text.

I find the changes implemented satisfactory.

Experimental design

I had previously raised some concerns regarding methodology, which the authors have resolved. Specifically, the authors have:

  • Unified the RSSI preprocessing and AP selection criteria.

  • Added a rigorous ablation analysis comparing preprocessing strategies (raw vs. normalized vs. β = e power transform).

  • Provided detailed explanations for missing-value handling and distribution shifts, which directly addresses prior ambiguity.

  • Expanded the H-RNN architecture description, including sampling cadence, embedding dimensions, recurrent layer types, hidden units, and loss-weighting rationale.

  • Included clearer descriptions of the federated learning configuration and client-server training flow.

  • Clarified the IoT routing graph formulation, dynamic edge weighting, and uncertainty handling via snap-to-graph logic.

These additions significantly strengthen the methodological rigor and ensure replicability.

The revised version now provides the details expected of primary research in this area in my opinion.
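For context, the β = e power transform discussed above is commonly applied to Wi-Fi RSSI fingerprints; the sketch below is an illustrative reconstruction, not the authors' code. The −100 dBm fill value for missing readings and the −105 dBm floor are figures quoted elsewhere in this review history; the function name and missing-value sentinel are assumptions.

```python
import math

MISSING = 100          # sentinel often used in RSSI datasets for "AP not heard" (assumed)
FILL_DBM = -100.0      # default for missing readings, per figures quoted in this review
MIN_DBM = -105.0       # minimum RSSI floor, per figures quoted in this review
BETA = math.e          # exponent of the power transform (β = e)

def preprocess_rssi(readings):
    """Map raw RSSI (dBm) to [0, 1] via a 'powed' transform.

    fill missing -> clip to the floor -> shift to a positive range ->
    scale to [0, 1] -> raise to beta. Strong signals map near 1,
    missing/weak signals near 0.
    """
    out = []
    for r in readings:
        if r == MISSING:
            r = FILL_DBM
        r = max(r, MIN_DBM)              # clip below the floor
        positive = r - MIN_DBM           # 0 .. -MIN_DBM
        scaled = positive / (-MIN_DBM)   # 0 .. 1
        out.append(scaled ** BETA)
    return out
```

An ablation of the kind the reviewer requested would compare this representation against the raw and min-max-normalized inputs under identical training conditions.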

Validity of the findings

The manuscript now presents results with internal consistency. The authors have:

  • Corrected all previously inconsistent error metrics.

  • Unified evaluations using RMSE, MAE, and percentile distance errors.

  • Added Table 4 with per-floor and overall performance, including 50th, 75th, and 95th percentile errors.

  • Replaced ambiguous percentage-based “accuracy” with interpretable distance-based performance metrics.

  • Clarified how the regression metrics align with the described use case.

  • Provided confidence-interval information and expanded the discussion of variability between different building geometries.

These revisions greatly enhance the scientific validity and interpretability of the results.

The conclusions are now firmly grounded in the reported findings and remain well aligned with the research questions and stated objectives.

Additional comments

The revised manuscript demonstrates substantial improvement and now reflects the high quality expected for publication.

The authors’ responses explicitly address every point raised in prior reviews, and the incorporated changes are appropriate, technically sound, and clearly implemented in the manuscript text.

In light of the thorough revisions and clear, satisfactory responses, I am happy to recommend acceptance of the manuscript in its current form.

Version 0.2

Academic Editor

Minor Revisions

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 3

Basic reporting

The manuscript is largely clear with a professional structure (Abstract → Introduction → Related Studies → Methodology → Results → Discussion/Future Work → Conclusion). Figures and tables support the narrative (LiDAR vs. QGIS discrepancy; training curves; FL client configs).

The authors have implemented the requested changes, and I am satisfied with the basic reporting.

Experimental design

The research question and gap are well framed. The authors attempt to integrate accurate 3D modeling (QGIS), privacy-preserving FL with an H-RNN, and IoT routing for multi-story indoor navigation, addressing the privacy, scalability, and adaptability gaps in prior systems. The three-phase structure (modeling → FL positioning → IoT routing) is coherent.

Review feedback:

1. You state AP inclusion ≥98% (line 298), a drop to 206 APs, and a minimum RSSI of −105 dBm with power transform β = e (while elsewhere missing values default to −100 dBm). Unify these numbers and justify the design choices; show an ablation (no transform vs. transform, different missing-value policies). Provide full distribution plots pre-/post-transform.

2. The H-RNN model architecture still lacks tangible details. A detailed explanation should cover: input window length, sampling cadence, embedding dimensions, number/type of recurrent layers (LSTM/GRU), hidden units, multi-task heads (building/floor classifiers), loss weights (γb, γf), and their rationale. Include a model diagram with tensor shapes.

3. IoT routing. Dijkstra implementation is fine, but specify graph construction (nodes/edges per corridor/door, inter-floor transitions), dynamic edge weights (congestion/closures), and how positioning uncertainty propagates to routing (e.g., snap-to-graph with uncertainty radii).
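The uncertainty-aware routing requested in point 3 can be sketched briefly. This is not the authors' implementation; the graph layout, weights, and function names below are invented for illustration. A position estimate with an uncertainty radius is first snapped to candidate graph nodes, after which a standard Dijkstra search runs over the (possibly dynamically weighted) adjacency structure.

```python
import heapq
import math

def dijkstra(adj, src):
    """Standard Dijkstra over an adjacency dict {node: [(neighbor, weight), ...]}.

    Edge weights can be updated between calls to model congestion or closures.
    Returns shortest-path distances from src to every reachable node.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def snap_to_graph(estimate, radius, node_coords):
    """Snap a position estimate to the routing graph.

    Returns all nodes within the uncertainty radius; falls back to the
    single nearest node when none are in range.
    """
    in_range = [n for n, c in node_coords.items()
                if math.dist(estimate, c) <= radius]
    if in_range:
        return in_range
    return [min(node_coords, key=lambda n: math.dist(estimate, node_coords[n]))]

# Toy corridor graph (coordinates in meters; values invented).
nodes = {"A": (0, 0), "B": (3, 0), "C": (3, 4)}
adj = {"A": [("B", 3.0)],
       "B": [("A", 3.0), ("C", 4.0)],
       "C": [("B", 4.0)]}
```

Running `snap_to_graph((2.8, 0.1), 0.5, nodes)` would attach the estimate to node `B`, from which `dijkstra(adj, "B")` yields the route costs.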

Validity of the findings

Overall, the validity needs additional refinement. Internal issues to fix:

1. Results cite RMSE 0.36 m (with 95% CI ±0.05 m), but Discussion cites 0.22–0.36 m, and later another “average error percentage of 0.21 and RMSE of 0.173 m” appears—this mixes units/definitions and contradicts earlier values. Can we provide one table of record with per-floor and overall RMSE/MAE, plus confidence intervals and the exact computation method (bootstrap vs analytic)?

2. “Over 99% positioning accuracy.” Define the target: building-level? Floor-level? Point-level within X meters? If classification, report per-class precision/recall/F1 and macro/micro. If regression, avoid “accuracy%” altogether; use distance errors (median/mean RMSE, 50/75/95th percentiles).
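To make the metric requests in points 1 and 2 concrete, here is a minimal sketch of distance-based error reporting with a bootstrap confidence interval of the kind suggested. The data are synthetic and only NumPy is assumed; this is not a reproduction of the manuscript's results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-point distance errors in meters (illustrative only).
errors = rng.gamma(shape=2.0, scale=0.15, size=500)

# Point-estimate metrics: RMSE, MAE, and 50/75/95th percentile distance errors.
rmse = float(np.sqrt(np.mean(errors ** 2)))
mae = float(np.mean(errors))
p50, p75, p95 = np.percentile(errors, [50, 75, 95])

# Bootstrap 95% CI for the RMSE: resample points with replacement,
# recompute the statistic, and take the 2.5th/97.5th percentiles.
boot = []
for _ in range(2000):
    sample = rng.choice(errors, size=errors.size, replace=True)
    boot.append(float(np.sqrt(np.mean(sample ** 2))))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
```

A "table of record" would then report these quantities per floor and overall, with the CI method (bootstrap, as here, or analytic) stated explicitly.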

Version 0.1 (original submission)

Academic Editor

Major Revisions

**PeerJ Staff Note:** Please ensure that all review and editorial comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

**Language Note:** The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title). Alternatively, you should make your own arrangements to improve the language quality and provide details in your response letter. – PeerJ Staff

Anonymous Reviewer

Basic reporting

Clarity and Language:

The manuscript is generally well-structured and written in clear, professional English. However, some sections (e.g., the Introduction and Methodology) contain repetitive phrases (e.g., "three-dimensional indoor navigation" repeated multiple times). Streamlining these sections would enhance readability.

Minor grammatical errors exist (e.g., "LiDAR readings vary in different locations because of the false ceiling. However, the QGIS readings remain similar, since the altitude of the ceiling level of each floor is expected to be the same at various locations."). Thorough proofreading is recommended.

Figures and Tables:

Figures 1–6:

Figure 1 lacks clear labels for components like "Central Learning Component" and "Real Time Calibration." Adding annotations or a legend would improve clarity.

Figures 3 and 4(b) have inconsistent axis labels (e.g., "Dis similarities" in Figure 3 should be "Discrepancies").

Figure 6’s subplots are overcrowded, making it difficult to distinguish trends across clients. Consider splitting into separate figures or simplifying the visualization.

Tables:

Table 1 and Table 3 use inconsistent units (e.g., "0.35" vs "0.35 m"). Ensure all measurements include units.

Table 2’s title ("Parameters for training and validation") is generic. Specify that these parameters relate to federated learning training.

References:

The literature review adequately covers federated learning and IoT but lacks recent works on fingerprint-based deep learning for indoor positioning.

Experimental design

Methodology:

The three-phase approach (3D modeling, FL-based positioning, IoT integration) is well-designed. However, the description of the H-RNN architecture (Section 3.4) is vague. Provide a diagram or pseudocode to clarify the hierarchical structure and task weighting mechanism.

The LiDAR data collection process (Phase 1) needs more detail: How were environmental variables (e.g., lighting, obstructions) controlled during scanning?

Reproducibility:

While the Python packages (TensorFlow, Keras) are listed, critical hyperparameters (e.g., learning rate, optimizer settings) for the RNN and FL training are omitted. Include these to ensure replicability.

Validity of the findings

Results:

The reported accuracies (99% for positioning and 98.7% for routing) are impressive. However, the evaluation is limited to controlled environments (university buildings). Testing in dynamic, real-world settings (e.g., crowded malls) would strengthen validity.

The RMSE of 0.22–0.36 meters is commendable but should be contextualized against state-of-the-art benchmarks (e.g., compare with Shahbazian et al., 2023 or the suggested citation above).

Statistical Robustness:

The manuscript does not report confidence intervals or p-values for accuracy/RMSE metrics. Adding statistical significance tests would reinforce the claims.

Additional comments

Strengths:

  • Novel integration of federated learning with 3D modeling and IoT for indoor navigation.
  • Comprehensive evaluation across multiple building structures.

Weaknesses:

  • Limited discussion of scalability challenges (e.g., computational overhead for large-scale deployments).
  • Figures require refinement for clarity and consistency.

Suggestions for Improvement:

  • Revise figures to ensure labels, legends, and units are consistent and legible.
  • Expand the literature review to include fingerprint-based deep learning approaches.
  • Provide hyperparameters and environmental controls for reproducibility.
  • Discuss limitations (e.g., reliance on Wi-Fi RSSI signals, which can be unstable in dense environments).

Anonymous Reviewer

Basic reporting

Raw data is not sufficiently discussed.
Background literature is covered but needs to be more current.
English is mostly professional but needs copyediting.

Experimental design

Reproducibility is limited due to missing implementation and configuration details; consider adding these.
Ablation and comparative studies are needed.

Validity of the findings

Conclusions are supported by results.
Statistical tests are missing; these are needed.
Unresolved issues are acknowledged.

Additional comments

This paper addresses a timely and practical problem, enhancing indoor navigation accuracy while preserving user privacy through federated learning and IoT integration. Its use of QGIS, LiDAR, H-RNN, and FL represents a meaningful fusion of technologies with potential for real-world applications in smart buildings, emergency services, and accessibility.

Thank you for this article.

Detailed peer review comments follow:

Abstract:
1. The abstract lacks a clear problem statement. For example, consider adding a sentence like: "Current indoor navigation systems lack scalability, privacy, and adaptability across multi-story buildings."
2. “The error percentage is almost nonexistent…” – support this with a quantified numerical value; in its current form the statement is misleading.
3. Clearly state what makes your work novel compared to previous studies. For example, is this the first system to combine these particular techniques, and at what accuracy?
4. The claim of “over 99% accuracy” is repeated in multiple ways. Condense all performance results into one sentence.

Introduction:
1. What is the motivation behind this study? Consider adding a paragraph explaining the societal or industrial need for precise indoor navigation systems in hospitals, airports, etc.
2. Group related concepts together and end with a clear transition to your solution. At present, the Introduction jumps across multiple concepts (e.g., BIM, VR, LiDAR) too quickly, making it hard to follow.
3. Explain abbreviations such as QGIS and FL before using them repeatedly.
4. The 2016 GIS visualization work may not adequately support 2025-level advancements. Consider citing recent research.
5. The key deficiencies in existing work, and how your work fills this gap, are not highlighted.

Related Work:
1. Can you include recent benchmarks or surveys (2022–2025) on FL for indoor localization?
2. A table comparing prior work across features like scalability, privacy, and accuracy would make the literature comparison much clearer.
3. Several in-text citations, like “(Xu et al., 2020)” and “(Zhan et al., 2022)”, appear too densely packed. Can you please space them out?
4. Can you explicitly list three major limitations of the most relevant prior systems, such as failure in large structures, privacy risks, or poor model generalization?
5. Also add a paragraph summarizing how your system overcomes each of these limitations.

Methodology:
1. Can you present the key parts of the federated learning process and the H-RNN more clearly?
2. The mathematical symbols, especially in Equations (1)–(4), are poorly formatted or undefined. Please define each symbol when it is first introduced.
3. Expand the preprocessing steps: explain how missing RSSI values were handled and why a power transformation was necessary.
4. How does the hierarchy in the H-RNN affect this? Clarify how the hierarchical layers correspond to the floor/building structure and how weights were assigned to the different tasks.

Results:
1. Add significance testing to validate differences between client numbers or configurations. Ensure confidence intervals or p-values are reported.
2. Can you clarify the experimental setup? Define how users moved through the building during tests (e.g., paths, start/end points, speed variation).
3. How does variability impact results? For example, discuss how performance varies between floors or between different building geometries, beyond average RMSE.
4. Did you perform an ablation study? Please include performance when FL or the H-RNN is removed. What is the accuracy without FL? This would strengthen the argument for using all components.

Conclusion:
1. This section may need refactoring: the first three paragraphs are redundant with the abstract. Focus instead on key outcomes, practical value, and challenges.
2. Add specific future directions, e.g., training on larger datasets, adding semantic indoor maps, or handling signal interference.
3. What is the real-world application of this work? Could the system be applied in a hospital or airport? Please mention specific plans or challenges for deployment.
4. What is the broader impact of your work? For example, highlight how your framework could enable smart buildings, personalized navigation, and safer evacuation routes.

Reviewer 3

Basic reporting

The manuscript presents a multidisciplinary approach integrating 3D modeling, federated learning (FL), and IoT-enabled devices for indoor navigation. While the topic is timely and potentially impactful, the current draft suffers from several fundamental issues in clarity and reporting:

Many descriptions are vague or overly general. For example, the phrase “enhance space usage and general user experience” lacks concrete meaning and is not supported by measurable outcomes.

Critical terms and tools (e.g., QGIS, RSSI, TFmini LiDAR sensor, UART) are introduced without sufficient explanation. The manuscript assumes prior knowledge that many readers may not possess, which undermines accessibility.

The authors mention “Phases 1, 2, and 3,” but begin directly with Phase 3 in the abstract, before explaining what the earlier phases entail. This confuses the flow and should be reorganized.

Although several technologies are named (e.g., QGIS, Python, PostGIS, edge devices), the paper lacks a consolidated system architecture diagram or detailed textual breakdown of how these components interact.

Experimental design

The experimental framework as described is underdeveloped and lacks the detail required for replication or rigorous peer evaluation:

The section describing the use of QGIS for 3D modeling is vague and does not outline the specific data sources, modeling steps, or validation procedures. It makes broad claims about accuracy but omits the step-by-step process of how models were constructed or evaluated.

The integration of federated learning and RNNs is introduced generically, with no specifics about the training architecture, input features, model parameters, or communication flow.

The sentence “Only model updates are transmitted to a remote server” lacks clarity: what model updates? Are these weight deltas? Gradients? No technical explanation is provided.

The use of differential privacy is mentioned, but no implementation details or algorithms are given. The reference to “an algorithm that could achieve the best convergence speed” is speculative and unsupported.

No baseline comparison is made against traditional indoor localization methods (e.g., centralized models, KNN, SVM, non-hierarchical RNNs), which weakens the study’s ability to substantiate its claimed improvements.

Validity of the findings

The validity of the results is difficult to assess due to major omissions:

The placement and configuration of access points (APs) used for RSSI measurement are not described. The paper mentions "strategically positioned" APs but does not specify how many were used, where they were located, or how signal data was processed.

Although the authors report >99% accuracy and low RMSE (0.36 m), the underlying raw datasets are not provided in the supplemental materials (the code is provided), violating PeerJ’s policy on data transparency. Without access to LiDAR scans, RSSI measurements, or model evaluation logs, the study cannot be validated or reproduced.

Descriptions of the TFmini LiDAR sensor and its integration via UART are confusing. It’s unclear what hardware setup was used, how the scanning was conducted (height, movement pattern, duration), or how this data was input to QGIS.

The FL training procedure, described as involving multiple clients and communication rounds, lacks the specifics needed to evaluate its efficiency, reliability, or convergence stability.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.