Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on July 26th, 2024 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 6th, 2024.
  • The first revision was submitted on September 30th, 2024 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 10th, 2024.

Version 0.2 (accepted)

· Oct 10, 2024 · Academic Editor

Accept

Congratulations on the acceptance of the manuscript.

I reviewed the response to reviewers to check why Reviewer 2 made this comment. Looking at the authors' response, I agree that their explanation was sufficient and that no further edits to the manuscript were needed.

Reviewer 1 ·

Basic reporting

No comment.

Experimental design

No comment.

Validity of the findings

No comment.

Additional comments

The authors have addressed all the comments.

·

Basic reporting

Greetings. I cannot find the corrections for the observations I indicated, such as adding the confusion matrices for training and validation, as well as the explanation of why cv = 5 was used rather than another value, etc.

Experimental design

---

Validity of the findings

--

Additional comments

---

Version 0.1 (original submission)

· Sep 6, 2024 · Academic Editor

Major Revisions

Please provide a revision responding to the reviewers' comments that you agree with, in order to improve the quality of your submission.

Reviewer 1 ·

Basic reporting

Please refer to more recent references. It is recommended that the manuscript be thoroughly reviewed for grammar, punctuation, and sentence structure to enhance clarity and readability.

Experimental design

1. How can the model be implemented in clinical practice, and what are its advantages in a clinical setting?
2. How does the study intend to improve clinical decision-making in the context of CKD?
3. How were the 53 laboratory features selected, and could significant predictors have been excluded by the feature selection techniques?
4. What justifies the division of the dataset into a modelling dataset (n = 296) and a validation dataset (n = 71)? Is this sample size sufficient for robust model development and validation?
5. Did the study address the data imbalance issue, especially considering that only 148 of the 987 patients had kidney failure? How might this affect the model's performance?
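The imbalance raised in the last question can be quantified directly: with 148 kidney-failure cases among 987 patients (figures from the review), inverse-frequency class weighting is one common mitigation. The sketch below is illustrative only, assuming the "balanced" heuristic used by scikit-learn; it is not the authors' pipeline.

```python
# Illustrative sketch: inverse-frequency ("balanced") class weights for the
# imbalance noted by the reviewer (148 positive cases out of 987 patients).
def balanced_class_weights(counts):
    """weight_c = n_samples / (n_classes * n_c), following the
    class_weight='balanced' heuristic (an assumption, not the paper's method)."""
    n = sum(counts.values())
    k = len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

counts = {"kidney_failure": 148, "no_failure": 987 - 148}
weights = balanced_class_weights(counts)
print(weights)  # minority class weighted ~5.7x the majority (839/148)
```

Whether such weighting (or resampling) was applied is exactly what the reviewer asks the authors to clarify.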

Validity of the findings

1. Is the decrease in AUC from the validation dataset (0.896) to the external dataset (0.771) concerning, and what does it reveal about the model’s stability and robustness?
2. What are the potential consequences if the model fails to predict CKD progression correctly, and how should clinicians mitigate such risks?
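For context on the AUC drop in question 1 (0.896 to 0.771): AUC is the probability that a randomly chosen positive case is ranked above a randomly chosen negative one, so the drop means noticeably worse ranking on external data. A minimal rank-based computation, on hypothetical toy scores rather than the study's data, is:

```python
# Illustrative sketch: AUC as the probability that a positive example
# outranks a negative one (equivalent to the Mann-Whitney U statistic).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise wins; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive case is ranked below one negative case.
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
labels = [1,   1,   1,   0,   0,   0]
print(auc(scores, labels))  # 8/9 ≈ 0.889
```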

Additional comments

The resolution of the figure can be enhanced for better clarity and visual quality.

·

Basic reporting

No comment.

Experimental design

The article does not indicate what gap it fills.
Provide a diagram detailing the procedure of the proposal.

Validity of the findings

Improve the conclusions in relation to the findings.

Additional comments

* In item 1:
Add 5 related studies on model ensembling, and likewise indicate how your proposed model differs from them.
* In item 2:
a) Provide a diagram indicating the procedure of the experiment.
* In point 2.4:
a) On what did you base your decision to use cross-validation with cv = 5?
b) Why did you not run experiments from cv = 5 to cv = 10 in order to find the best model?
c) Please also indicate the parameters you used in each of the models.
* In point 4:
Discuss the results you obtained with respect to your background in point 1.
In figure 1:
* On which theories was the proposed stacking of the XGBoost, LightGBM, Random Forest, and logistic regression algorithms based, rather than other algorithms?
* Include a correlation graph and determine the influence of each variable on the prediction of chronic kidney disease.
* Scale the variables to improve the result obtained.
* Include the confusion matrices obtained for both training and validation.
* Indicate the practical and theoretical implications of the model.
* Indicate the limitations, as well as the challenges and future work.
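The cv = 5 vs cv = 10 comparison and the confusion-matrix request above can both be illustrated with a small standard-library sketch. The data and the fixed-threshold "classifier" below are hypothetical stand-ins; the paper's actual stacking pipeline (XGBoost, LightGBM, Random Forest, logistic regression) would substitute for them.

```python
import random

# Illustrative sketch: k-fold cross-validation at cv=5 and cv=10, plus a
# confusion matrix, using a trivial threshold rule in place of a real model.

def k_fold_indices(n, k):
    idx = list(range(n))
    return [idx[i::k] for i in range(k)]  # k roughly equal, disjoint folds

def confusion_matrix(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

def cross_val_accuracy(x, y, k):
    accs = []
    for test_idx in k_fold_indices(len(x), k):
        # A real pipeline would fit the model on the other folds here;
        # this sketch just applies a fixed threshold of 0.5.
        preds = [1 if x[i] >= 0.5 else 0 for i in test_idx]
        truth = [y[i] for i in test_idx]
        accs.append(sum(p == t for p, t in zip(preds, truth)) / len(truth))
    return sum(accs) / k

random.seed(0)
x = [random.random() for _ in range(200)]          # hypothetical feature
y = [1 if xi + random.gauss(0, 0.2) > 0.5 else 0 for xi in x]

for k in (5, 10):
    print(f"cv={k}: mean accuracy {cross_val_accuracy(x, y, k):.3f}")

preds = [1 if xi >= 0.5 else 0 for xi in x]
print(confusion_matrix(y, preds))
```

Reporting both fold settings and the resulting confusion matrices, as the reviewer requests, makes the choice of cv = 5 verifiable rather than arbitrary.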

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.