Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.

View examples of open peer review.

Summary

  • The initial submission of this article was received on April 22nd, 2025 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on June 11th, 2025.
  • The first revision was submitted on September 3rd, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on September 22nd, 2025 and was reviewed by 2 reviewers and the Academic Editor.
  • A further revision was submitted on October 13th, 2025 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 14th, 2025.

Version 0.4 (accepted)

Academic Editor

Accept

Dear Authors,

Thank you for addressing the reviewers' comments. Your manuscript is now sufficiently improved and ready for publication.

Best wishes,

[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]

Reviewer 3

Basic reporting

The author has improved the manuscript in this revised version. He has addressed all the major comments and concerns raised in the previous review, providing appropriate clarifications and corrections. The technical content, structure, and language quality have been improved, and the responses to reviewers are detailed and satisfactory.

Experimental design

no comment.

Validity of the findings

no comment.

Version 0.3

Academic Editor

Minor Revisions

Dear Authors,

One reviewer thinks that your manuscript requires minor revision. We encourage you to address the concerns and criticisms of Reviewer 2 and resubmit your paper once you have updated it accordingly.

Best wishes,

Reviewer 1

Basic reporting

Need to see that the articles are cited in ascending order.

Experimental design

1. Need to show how ACO is inspired by the food search behavior of ants and how it was designed to solve computational problems.

Validity of the findings

1. The proposed work should be compared with the existing ones in the form of a table at the end of the results section, citing the latest articles.

Reviewer 3

Basic reporting

I have carefully reviewed the revised version of the manuscript. The authors have made significant improvements compared to the previous submission and have addressed some of the reviewers’ questions and concerns. Nevertheless, there remain some issues that should be considered in the next revision to further improve the manuscript and ensure that it meets the journal’s standards. My specific comments are provided below.
- The statement in the abstract and elsewhere: “outperforming other state-of-the-art classifiers such as deep learning, decision trees, k-NN, and logistic regression” is too general and potentially misleading. Deep learning is not a single classifier to be compared with but a broad family of models (e.g., CNNs, RNNs, …). To make the claim clearer, the author should specify which deep-learning-based classifier was used; otherwise, the statement is not technically accurate.
- Introduction (lines 54-56), the manuscript states: “On the other hands, PROAFTN (Preference Ranking Organization METHod for Enrichment Evaluation – Approximate Fuzzy TOPSIS for Numerical data) AlObeidat2011.” This description and citation are problematic for several reasons:
• Incorrect acronym expansion: The phrase “Preference Ranking Organization METHod for Enrichment Evaluation” actually corresponds to PROMETHEE, which is a different MCDA method used for choice, sorting, and ranking problems. It is not related to the acronym of PROAFTN.
• Incorrect citation: The citation format is wrong. Furthermore, the reference to Al-Obeidat (2011) does not introduce the PROAFTN method itself. Rather, it concerns later work on learning procedures for PROAFTN as stated in the paper elsewhere. The original development of PROAFTN is by Belacel (1999, 2000), and these should be cited when introducing the method. I recommend the authors revise the sentence to correctly describe PROAFTN, and update the reference citations accordingly (e.g., [7, 8] should refer to the original works on PROAFTN).
- Introduction (lines 60–61): The manuscript states: “However, PROAFTN requires the prior determination of multiple parameters, such as interval boundaries, weights, and preference thresholds.” My understanding is that PROAFTN, as a nominal sorting/classification method based on feature interval learning, only requires the weights to be specified a priori. The interval boundaries are typically determined during the learning phase using a discretization approach. Hence, the role of ACO could be better emphasized as a way to enhance the interval learning process through metaheuristic search rather than relying solely on discretization heuristics as in the case of PROAFTN.
- Please correct the references citation to the following statement in line 69: “While some prior studies have explored metaheuristic algorithms—such as Genetic Algorithms (GA)\cite{references} and Particle Swarm Optimization (PSO)—for optimizing PROAFTN \cite{references}… “.

Experimental design

- In the section “Fuzzy Membership Function and Rule Derivation in PROAFTN”, Page 14, Lines 537–540:
The manuscript states: “In the PROAFTN classifier, fuzzy membership functions are defined for each attribute based on class-specific intervals. These functions are modeled using triangular shapes, where each membership function is characterized by a five-parameter tuple: the lower bound (a), lower support (b), center or peak (c), upper support (d), and upper bound (e)”. This description is incorrect and should be revised. In PROAFTN, fuzzy membership functions are modeled as trapezoidal fuzzy intervals, not triangular ones. A trapezoidal fuzzy number is characterized by four parameters — commonly denoted (a, b, c, d) — where:
a = lower bound (left foot),
b = lower support (left shoulder),
c = upper support (right shoulder),
d = upper bound (right foot).
A triangular fuzzy number is a special case of the trapezoid and is represented by three parameters (a, m, d) where m is the peak; equivalently it is the trapezoid case with b = c. Therefore, there is no separate “center or peak” parameter in the general PROAFTN fuzzy-interval representation.
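For reference, the trapezoidal membership function defined by the four-parameter tuple (a, b, c, d), with a ≤ b ≤ c ≤ d, takes the standard textbook form (shown here only to make the notation concrete; the manuscript's own symbols may differ):

```latex
\mu(x) =
\begin{cases}
0, & x \le a \ \text{or}\ x \ge d, \\
\dfrac{x - a}{b - a}, & a < x < b, \\
1, & b \le x \le c, \\
\dfrac{d - x}{d - c}, & c < x < d.
\end{cases}
```

The triangular special case is recovered by setting b = c, which collapses the flat top of the trapezoid to a single peak.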
Please do the following:
1. Revise the textual description to state that PROAFTN uses trapezoidal fuzzy intervals and replace the incorrect five-parameter tuple with the correct four-parameter notation (or three-parameter if triangularity is intentionally assumed).
2. Correct all formulae, definitions, and figures that currently use the five-parameter representation.
3. If the authors do intend to use triangular membership functions in this study, explicitly state that assumption, explain how it is imposed (i.e., set b = c), and update all related equations and notation accordingly.
- Section “Description of the Algorithm,” Lines 607–608: The manuscript still refers to “setting up the Genetic Algorithm”, even though a GA is not part of the proposed approach and the author previously acknowledged, in his response to reviewers, that this was an error. This reference must be removed from the text to avoid confusion and to keep the description of the method consistent.

Validity of the findings

no comments.

Version 0.2

Academic Editor

Minor Revisions

Dear Authors,

One of the previous reviewers did not respond to the invitation to review the revised paper. Although one reviewer accepts your paper, the other suggests minor revisions. We encourage you to address the concerns and criticisms of Reviewer 1 and resubmit your paper once you have updated it accordingly.

Best wishes,

Reviewer 1

Basic reporting

1. The writers have changed the article as per the suggestions made. Still, some of the generated values were not clearly explained.
2. The writers have not clearly explained how the ACO serves as an inductive learning mechanism that automatically infers the key fuzzy parameters.
3. Sentences with the same meaning are repeated; recheck the entire article.
4. The gaps identified from reading the existing works need to be listed.
5. The novel objectives of the work should be clearly defined.

Experimental design

1. If the writers show the workflow of the proposed work, the article will read better.
2. How suitable is the PROAFTN method for this work, given that it is used in image processing and classification?
3. What are the intervals considered for figure 1?
4. The equations are well defined, but the parameters considered to evaluate those equations were not explained clearly.
5. What is the importance of avoiding extreme cases in evaluating specific measures or quantities with a crisp interval?
6. How can the writers prove that the optimistic assessment can narrow the gap?
7. Need a citation for how Figure 2 was used as a benchmark.

Validity of the findings

1. What do the writers want to prove with Table 6?
2. Which PROAFTN parameters proved to be key to optimizing PROAFTN’s training and thus significantly improving its performance?
3. Why have the writers compared their proposed work only with the baseline models? There are many novel and hybrid models used by researchers in the latest articles, so why don't the writers compare with those existing ones?
4. The writers have to state how far the outcome was achieved using the proposed model, considering the novel parameters.

Additional comments

The writers have answered the queries raised by the reviewers but need to incorporate the answers into the article too. Some points are suggested here to improve the quality of the article. Note that many sentences with the same meaning are repeated; remove such sentences.

Reviewer 2

Basic reporting

The authors have made substantial improvements in response to reviewer comments, including updates to the abstract, updates to the fuzzy membership description, inclusion of an Ant Colony Optimization flowchart, incorporation of additional evaluation metrics, and clearer methodological explanations.
The literature is updated in the present version.

Experimental design

Research questions are meaningful and rigorous experimentation is conducted.

Validity of the findings

All underlying data have been provided; they are robust, statistically sound, and controlled.

Version 0.1 (original submission)

Academic Editor

Major Revisions

Dear authors,

Thank you for submitting your article. Feedback from the reviewers is now available. Your article is not recommended for publication in its current form. However, we strongly recommend that you address the issues raised by the reviewers, especially those related to readability, experimental design, and validity, and resubmit your paper after making the necessary changes. Before resubmitting, the following should also be addressed:

1. The Abstract does not clearly present the creation or usage of the dataset.

2. The problem statement is not clear in the Introduction section. Although the paper presents an analysis of general classification, this section does not clearly include a problem definition, motivation, overview of the proposed solution, or enumeration of the actual contributions. Please state the research gap and the motivation of the study. Evaluate how your study differs from others. Please highlight the originality, novelty, and advantages of the proposed method. More recent literature should be examined; there are many metaheuristic-based classification methods.

3. All of the values for the parameters of the algorithms selected for comparison should be provided.

4. Please pay special attention to the usage of abbreviations.

5. In the conclusions, please state explicitly what lessons can be learned from this study, and then describe the future research directions in more detail. The generalizability, adaptability, and sustainability of the proposed method should be discussed, since the method is only applied to the diabetes dataset.

6. Reviewers 2 and 3 have asked you to provide specific references. You are welcome to add them if you think they are useful and relevant. However, you are not obliged to include these citations, and if you do not, it will not affect my decision.

Best wishes,

**PeerJ Staff Note:** It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors agree that they are relevant and useful.

**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and that any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.

Reviewer 1

Basic reporting

The article sounds good, but the content is written in a general way. To reach an objective, certain features have to be considered, but these were not described properly.

Experimental design

Here, the proposed models were novel, but they were described in a general way. How they were implemented was not written. It needs to be considered which characteristics should be taken into account to reach the objectives.

Validity of the findings

OK. The work is compared with the basic models, but many articles by researchers use novel models, and the writers have to compare with them. Visualisation is weak. The article needs to be improved as per the suggestions made.

Additional comments

1. The paragraphs in the introduction section need connectivity between them. The articles must be cited in ascending order.

2. How should membership functions be calculated? How can we derive the fuzzy rules from membership functions?

3. List out the fuzzy rules that are being considered. How will you implement defuzzification based on the problem statement?

4. The list of combinations needed as per the membership functions, on the basis of the membership rules, should be provided.

5. Need to implement PROAFTN. This section explains how the pessimistic and optimistic intervals were evaluated. PROAFTN uses fuzzy intervals to maintain a balance between pessimistic and optimistic extremes.

6. The equation was defined, but the parameters needed to compute the fuzzy indifference relation were not defined. Equations were not cited, and their usage was not clearly explained.

7. "Estimation of Membership Degree and Classification of an Object" is written in a general way. A brief description is needed of how the target variable was reached and which characteristics were considered.

8. What is the purpose of developing metaheuristic models to enhance PROAFTN learning?

9. An architecture, along with step-by-step algorithm details and explanations, is needed for the proposed model.

10. In the entire article, the parameters considered to enhance diabetes detection were not declared. There is a great deal of general content, but the necessary actions taken to detect the problem are not clear.

11. Need to compare the obtained results with the existing results in the form of a table, citing the latest references.

12. The methods used in the article were very basic. There are many articles written using novel/hybrid/genetic models. Why have the writers chosen only these basic models?

13. The Table 6 parameters were not understandable. How is the behaviour of the property evaluated?

14. The Results section has to improve a lot with the visualization. It would be beneficial to include graphs to illustrate the results obtained.

15. The Conclusion section should be reduced. Unnecessary data can be deleted.

Reviewer 2

Basic reporting

The manuscript is generally well-written with professional and scientific English.

The introduction provides a clear context for the study and outlines the motivation and objective effectively.

Figures and equations are used appropriately to explain the PROAFTN methodology and ACO integration.

Minor language edits are required to improve clarity in a few sections (e.g., repetition in the abstract, such as “This study explores…” twice).
The paper is lengthy and would benefit from condensing repetitive descriptions (e.g., explanations of ACO appear in multiple places in similar form).
The manuscript would benefit from visual clarity improvements in figures (ensure readable labels and high resolution).

Consider the following recent similar articles:
https://www.sciencedirect.com/science/article/pii/S1877050925014693
https://www.researchgate.net/profile/Shruti-Garg-5/publication/389713127_Exploring_the_Potential_of_Royal_Animal_Optimization_Algorithms_for_Diabetes_prediction_in_Indian_population_data/links/67cf8031bab3d32d8440982a/Exploring-the-Potential-of-Royal-Animal-Optimization-Algorithms-for-Diabetes-prediction-in-Indian-population-data.pdf
https://link.springer.com/article/10.1007/s13755-023-00242-x

**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.

Experimental design

The research is relevant and addresses a significant challenge in diabetes classification using fuzzy MCDA and bio-inspired optimization.

PROAnt is a novel methodology combining PROAFTN with ACO for enhanced learning from data.

The mathematical formulations are rigorous, and implementation details are well-documented.

Sufficient details are provided to allow reproducibility.

Suggestions for Improvement:
1. While the ACO methodology is described in detail, a concise summary or diagram of the training-testing loop would help improve understanding.
2. Add a clearer distinction between the baseline PROAFTN and the proposed PROAnt throughout the experiments.
3. Parameter sensitivity analysis is missing; a brief evaluation of how ACO hyperparameters (e.g., α, β, ρ) affect classification accuracy would be insightful.
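As a rough illustration of what such a sensitivity analysis would probe, the standard ACO transition rule and evaporation step can be sketched as follows (a generic textbook sketch, not the authors' implementation; the pheromone values `tau` and heuristic values `eta` below are invented for illustration):

```python
import numpy as np

def transition_probabilities(tau, eta, alpha, beta):
    """Probability of an ant choosing each candidate move.

    tau: pheromone levels; eta: heuristic desirability.
    alpha and beta weight pheromone vs. heuristic information,
    so varying them shifts how greedy or exploratory the search is.
    """
    weights = (tau ** alpha) * (eta ** beta)
    return weights / weights.sum()

def evaporate(tau, rho):
    """Pheromone evaporation: rho in (0, 1) controls how fast old trails fade."""
    return (1.0 - rho) * tau

# Example: with uniform pheromone, a larger beta pushes the choice
# toward the move with the strongest heuristic signal.
tau = np.array([1.0, 1.0, 1.0])
eta = np.array([0.2, 0.5, 0.9])
p = transition_probabilities(tau, eta, alpha=1.0, beta=2.0)
```

Sweeping alpha, beta, and rho over a small grid and recording classification accuracy at each setting would constitute the sensitivity analysis suggested above.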

Validity of the findings

A large dataset (100,000 samples) is used, which enhances the statistical reliability of results. However, the dataset description and preprocessing need more transparency; details such as missing value treatment, normalization, class balance, etc., are not clearly discussed.

The paper provides a comparative study against standard classifiers (Decision Tree, Logistic Regression, k-NN, Deep Learning), with superior performance from PROAnt. The claim of "state-of-the-art" performance should be substantiated with comparisons to more recent deep learning approaches, not just traditional classifiers.

Additional comments

Novelty & Contribution: The integration of ACO with PROAFTN for parameter optimization in a fuzzy MCDA framework is novel. The contribution is significant in terms of methodological design and potential applicability in healthcare diagnostics.

Impact: The proposed methodology has potential application in broader medical classification tasks and other MCDA-based domains.
Clarity: The manuscript can be improved by trimming redundant explanations.

Reviewer 3

Basic reporting

The manuscript is generally well-written and uses professional English. However, there are occasional grammatical issues or awkward phrases that may benefit from light editing for clarity and flow. The technical terminology is mostly used correctly, but a proofreading pass is recommended.

On page 2, Line 51, the manuscript has mixed languages by using the German word "Gleichezeitig" instead of the English "At the same time", which affects clarity and professionalism. All text should be in clear, grammatically correct English. A thorough proofreading pass is recommended to correct such issues.

In line 66, the word categorize should be replaced by classify.
The manuscript lacks a sufficiently developed related work section, particularly regarding the application of Metaheuristics to enhance the PROAFTN classifier.

There is limited discussion of existing studies that have explored the integration of metaheuristic algorithms, such as PSO, with the PROAFTN classification method. For example:
- Al-Obeidat, Feras, et al. "An evolutionary framework using particle swarm optimization for classification method PROAFTN." Applied Soft Computing 11.8 (2011): 4971-4980.
- Al-Obeidat, Feras, et al. "Differential evolution for learning the classification method PROAFTN." Knowledge-Based Systems 23.5 (2010): 418-426.
- Al-Obeidat, Feras, et al. "Automatic parameter settings for the PROAFTN classifier using hybrid particle swarm optimization." Canadian Conference on Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010.
Moreover, the paper omits relevant literature on machine learning techniques that use interval learning for classification, such as the PROAFTN method.

**PeerJ Staff Note:** PeerJ's policy is that any additional references suggested during peer review should only be included if the authors find them relevant and useful.

Consider some methods with their advantages and limitations. For example,
- Dayanik, Aynur. "Feature interval learning algorithms for classification." Knowledge-Based Systems 23.5 (2010): 402-417.
- De Chazal, Philip, Maria O'Dwyer, and Richard B. Reilly. "Automatic classification of heartbeats using ECG morphology and heartbeat interval features." IEEE Transactions on Biomedical Engineering 51.7 (2004): 1196-1206.
- Demiröz, Gülşen, and H. Altay Güvenir. "Classification by voting feature intervals." European Conference on Machine Learning. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997.
- Güvenir, H. A., and N. Emeksiz. "An expert system for the differential diagnosis of erythemato-squamous diseases." Expert Systems with Applications 18.1 (2000): 43-49.
- Belacel, N. “A Closest Resemblance Classifier with Feature Interval Learning and Outranking Measures for Improved Performance.” Algorithms. 2025; 18(1):7. https://doi.org/10.3390/a18010007.

A more thorough review of prior work would help contextualize the proposed method within the broader field and clarify its novelty and contribution. I recommend that the authors expand the background section to include and critically discuss previous efforts in: using metaheuristics to improve classification methods, and classification methods based on Feature interval Learning.

PROAFTN was proposed in 1999 [6], and the paper usually cited as its first publication is [7], published in 2000 in EJOR, not the 2011 work cited in the paper on page 2, line 58. Reference [2] on page 2, line 58 should be replaced by references [6] and [7].

In line 79, page 2, the author should add the reference that introduced Ant Colony Optimization (ACO): Marco Dorigo's PhD thesis from the early 1990s, titled "Optimization, Learning and Natural Algorithms".
Some figures and algorithms are referenced in the text, but they are not displayed. For example, Figure 3 appears to be missing from the manuscript. While the caption and references to the figure are present in the text, the figure itself is not displayed. This may be due to a formatting or conversion issue (e.g., LaTeX to PDF rendering error). I recommend that the authors carefully review the compiled version of the manuscript to ensure all figures and algorithms are correctly embedded and visible. The inclusion of all referenced figures is essential for the clarity and completeness of the presentation.
As a reviewer, it is hard for me to report on the methodology if the algorithms, as well as the flowchart of the ACO, are not displayed correctly.

Experimental design

The inclusion of all referenced figures and algorithms is essential for the clarity and completeness of the presentation. As a reviewer, it is hard for me to report on the methodology if the algorithms and the flowchart of the methodology are not correctly displayed.

In Section Description of the algorithm, page 12: Line 457, the author refers to setting up the Genetic Algorithm parameters, but there is no indication that a GA is part of this method. This would be confusing or incorrect, unless GA is indeed part of the process, which is not the case.

The table presenting classification performance includes metrics such as accuracy, precision, and weighted Kappa. However, the reported values across some methods (decision tree, Neural Network) appear very close, making it difficult to draw robust conclusions based solely on raw performance scores. To strengthen the validity of the results, a statistical analysis (e.g., Wilcoxon signed-rank test, Friedman test) should be performed to assess whether the observed differences are statistically significant. This would provide more rigorous support for any claims of performance improvement or comparative advantage.
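As an illustration of the suggested analysis, a paired Wilcoxon signed-rank test over per-fold accuracies might look like this (a minimal sketch using scipy; the fold accuracies below are invented placeholders, not values from the manuscript):

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired per-fold accuracies for two classifiers over the same 10 folds
# (illustrative numbers only).
acc_method_a = np.array([0.91, 0.93, 0.92, 0.94, 0.90, 0.92, 0.93, 0.91, 0.92, 0.94])
acc_method_b = np.array([0.89, 0.92, 0.90, 0.93, 0.88, 0.91, 0.92, 0.90, 0.91, 0.92])

# The test operates on the paired per-fold differences; a small p-value
# suggests the two methods differ systematically rather than by chance.
stat, p_value = wilcoxon(acc_method_a, acc_method_b)
```

Running the classifiers on identical folds is what makes the pairing valid; for comparisons across more than two methods, a Friedman test with post-hoc analysis would be the usual choice.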

The manuscript includes an experimental comparison with a deep learning method, but it does not specify which deep learning model was used. Details such as the model architecture (e.g., CNN, MLP), hyperparameters, training configuration, and dataset preprocessing are essential to properly evaluate the fairness and validity of the comparison. The same issue applies to the other classifiers, including the neural network and k-NN. For the neural network, is a multilayer perceptron with 1–2 hidden layers used? Also, how many neighbors k are used in k-NN? The author should clarify and provide more details on the hyperparameters used in the different classifiers in Table 5.
The experimental evaluation includes a Decision Tree as a baseline model. However, it is well-established in the machine learning literature that Random Forest consistently outperforms single decision trees due to its ensemble nature and robustness against overfitting. To strengthen the credibility and completeness of the comparative analysis, I recommend including Random Forest as an additional baseline. This would provide a more meaningful benchmark and help position the proposed method relative to widely adopted and strong classification models.

Validity of the findings

-

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.