All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The paper can be accepted. Congratulations.
[# PeerJ Staff Note - this decision was reviewed and approved by Miriam Leeser, a PeerJ Section Editor covering this Section #]
All the reviewers' comments have been addressed carefully and sufficiently. The revisions are reasonable in my view. I think the current version of the paper can be accepted.
No comment.
No comment.
No comment.
The authors have addressed all issues.
The authors have addressed all issues.
The authors have addressed all issues.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The paper does not propose a novel method; similar methods have been published before, and the authors did not discuss them. For example:
- Al-Mahturi, A., Santoso, F., Garratt, M. A., & Anavatti, S. G. (2021). Self-learning in aerial robotics using type-2 fuzzy systems: Case study in hovering quadrotor flight control. IEEE Access, 9, 119520-119532.
Good
Good
N/A
The paper introduces an adaptive rule reduction strategy to improve computational efficiency. However, the exact criteria for identifying and eliminating "repeating rules" are missing; this part remains unclear.
My comments are as follows:
- The authors should elaborate on the specific metrics or thresholds used to determine rule redundancy.
- A more detailed explanation of how the compatibility measure in Type-2 fuzzy sets influences rule reduction would strengthen the methodological transparency.
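To make the request concrete: a criterion of roughly the following shape would make the reduction step reproducible. This is a hypothetical sketch for illustration only; the similarity measure and the threshold of 0.9 are assumptions, not the authors' method.

```python
import numpy as np

def gaussian_similarity(c1, s1, c2, s2):
    """Illustrative similarity of two Gaussian membership functions
    with centres c and spreads s (hypothetical measure)."""
    # Distance between centres, scaled by the combined spread
    return float(np.exp(-np.abs(c1 - c2) / (s1 + s2)))

def redundant(rule_a, rule_b, threshold=0.9):
    """Flag two rules as 'repeating' when every antecedent pair is
    more similar than the threshold; rules are lists of (centre, spread)."""
    sims = [gaussian_similarity(ca, sa, cb, sb)
            for (ca, sa), (cb, sb) in zip(rule_a, rule_b)]
    return min(sims) >= threshold
```

Stating the measure and threshold in this explicit form would let readers reproduce the rule-elimination step exactly.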
My comments are as follows:
- Discuss how this model differs from or improves upon other rule reduction techniques in fuzzy systems, such as rule merging. A comparative analysis of computational overhead before and after rule reduction could further highlight the efficiency gains.
- The performance of the participatory learning and kernel recursive least squares components likely depends on hyperparameter selection. Could you provide an analysis of how sensitive the model is to key hyperparameters, such as learning rates or kernel parameters? This would help readers understand the robustness of the approach in different settings.
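A sensitivity analysis of the kind requested can be as simple as a grid sweep over the key hyperparameters. In the sketch below, `train_and_score` is a hypothetical callback standing in for one train-and-evaluate run of the authors' model; the specific parameter names are assumptions.

```python
import itertools

def sensitivity_sweep(train_and_score, learning_rates, kernel_widths):
    """Evaluate every (learning rate, kernel width) combination and
    report the spread of the resulting errors."""
    results = {}
    for lr, sigma in itertools.product(learning_rates, kernel_widths):
        # train_and_score is assumed to return a forecast error (e.g. RMSE)
        results[(lr, sigma)] = train_and_score(lr, sigma)
    # A large spread indicates strong hyperparameter sensitivity
    spread = max(results.values()) - min(results.values())
    return results, spread
```

Reporting the resulting table (or the spread alone) would tell readers how robust the method is to these choices.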
My comments are as follows:
- The experiments focus on chaotic time series and stock indices, but the broader applicability of the method is only briefly mentioned. The authors should discuss the potential challenges or adaptations needed to deploy the model in other fast-paced environments, such as industrial IoT or real-time decision systems.
- The literature review should be extended to adequately cover Type-1 and Type-2 fuzzy systems, as well as recent advances in hybrid neuro-fuzzy systems and deep learning-based uncertainty quantification methods. I suggest the authors read the following papers:
  - For neuro-fuzzy models: Yazdi and Komasi (2024). Best Practice Performance of COVID-19 in America Continent with Artificial Intelligence. Spectrum of Operational Research.
  - For ML techniques: Younas et al. (2024). A Framework for Extensive Content-Based Image Retrieval System Incorporating Relevance Feedback and Query Suggestion. Spectrum of Operational Research.
  - For fuzzy logic systems: Bosna, J. (2025). Examining regional economic differences in Europe: The power of ANFIS analysis. Journal of Decision Analytics and Intelligent Computing.
Incorporating a discussion on how these approaches compare to your method in terms of uncertainty handling and rule complexity would provide a more comprehensive background and highlight the novelty of your contribution.
The manuscript has numerous grammatical errors, awkward phrases, and some unclear or colloquial expressions.
Example: “our system archives dynamic updates” should be “achieves dynamic updates.”
“So long as the app goes on living as the app” is unclear and non-academic.
Sentence structures need tightening to maintain clarity and professionalism throughout.
Although the text references multiple figures (e.g., Figure 3, Figure 7) and tables (e.g., Table 10, Table 14), they are not included in the submission file.
This violates basic reporting standards and makes it difficult to verify the claims.
The manuscript does not mention how data were divided (e.g., training/test split, cross-validation).
There is no mention of preprocessing steps (normalisation, lag creation, etc.), which are vital in time series forecasting.
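For reference, the preprocessing detail being asked for amounts to only a few lines. Below is a minimal sketch of lag creation and min-max normalisation; it illustrates the kind of specification expected and is not the manuscript's actual pipeline.

```python
import numpy as np

def make_lagged(series, n_lags=3):
    """Build (X, y) pairs from a univariate series: each row of X holds
    the n_lags previous values used to predict the next value y."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

def minmax_scale(x):
    """Min-max normalisation to the [0, 1] interval."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

Stating the lag order, the scaling method, and the (chronological) train/test split would make the forecasting experiments reproducible.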
There is no clear description of the fuzzy system's hyperparameters or comparative baselines like LSTM or ARIMA. Please add a table to compare them.
Tables and figures mentioned in the text (e.g., Table 9, Figure 4) are not included.
No confidence intervals, p-values, or variance/error bars are provided for the reported performance metrics.
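Reporting such intervals requires little machinery. As one example, a normal-approximation confidence interval over repeated runs could be computed as below; this sketch assumes per-run error values (e.g. RMSE across random seeds) are available.

```python
import math
import statistics

def mean_ci(errors, z=1.96):
    """Mean and ~95% normal-approximation confidence interval for a
    list of per-run error values."""
    m = statistics.mean(errors)
    se = statistics.stdev(errors) / math.sqrt(len(errors))
    return m, (m - z * se, m + z * se)
```

Even this simple interval (or error bars from the run-to-run standard deviation) would let readers judge whether the reported differences between methods are meaningful.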
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.