All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The authors addressed the reviewers' concerns and substantially improved the content of the manuscript. So, based on my own assessment as an editor, no further revisions are required and the manuscript can be accepted in its current form.
[# PeerJ Staff Note - this decision was reviewed and approved by Keith Crandall, a PeerJ Section Editor covering this Section #]
My previous comments have been well addressed.
The experimental design is not solid and sound.
good
I have no further comments.
Your manuscript has been reviewed and requires several modifications before a decision can be made. The comments of the reviewers are included at the bottom of this letter. The reviewers indicated that the methods section should be improved. The manuscript also needs extensive English editing, as there are several typos and grammatical errors. I agree with this evaluation and would therefore request that the manuscript be revised accordingly.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title) #]
The work presented by the authors is interesting and addresses an important issue in resource-limited countries. However, some improvements should be implemented before it can be accepted.
1. The paper is not well written. For example, the abstract is poorly written and uninformative: much important information is missing, and readers cannot get a general idea of the study design (such as the sample size) or the main results (such as the performance metrics of the ML model). I strongly suggest that the authors report this paper per the TRIPOD checklist (https://med.stanford.edu/content/dam/sm/s-spire/documents/ManuscriptQualityChecklists/Tripod-Checklist-Prediction-Model-Development-and-Validation.pdf)!
2. The baseline characteristics should be reported.
3. Many other important variables, such as demographics and past history of pulmonary disease and smoking, are readily available in resource-limited countries; why not include these variables in your ML model?
Due to the imbalanced dataset, accuracy cannot be used to evaluate the model: a model that predicts that no patient has COVID-19 could still achieve 90% accuracy in your situation. Thus, I suggest reporting the AUROC or the precision-recall curve for the evaluation. Furthermore, you also need to compare your model to a baseline model, i.e., a naive model that predicts that no patient has COVID-19.
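The reviewer's point can be illustrated with a small sketch (hypothetical numbers, not the authors' data): on a 90/10 class split, an all-negative baseline scores 90% accuracy while its AUROC is only 0.5, exposing it as a chance-level classifier.

```python
# Sketch (hypothetical data): why accuracy misleads on an imbalanced
# test set, and how a naive all-negative baseline scores on each metric.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auroc(y_true, scores):
    """Rank-based AUROC: probability a random positive outranks a random negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 90 negatives, 10 positives -- roughly the imbalance described above.
y_true = [0] * 90 + [1] * 10
naive_pred = [0] * 100                # baseline: predict "no COVID-19" for everyone
print(accuracy(y_true, naive_pred))   # 0.9 -- looks good, but the model is useless
print(auroc(y_true, [0.0] * 100))     # 0.5 -- AUROC reveals chance-level ranking
```

Any proposed model should therefore beat this baseline's AUROC (0.5), not merely its accuracy.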
The model testing is limited by the small sample size; there are only 10 events.
The work presented by the authors is interesting and addresses an important issue in resource-limited countries. However, some improvements should be implemented before it can be accepted.
The overall quality of the writing is fine, but there are a number of grammatical and typographical mistakes that should be corrected. Please proofread.
The article meets the criteria, but would have been a lot stronger if a few things had been corrected:
1. The authors argue that one of the main benefits of using naive Bayes is the interpretability of the model, then state that in this case the model should not be interpreted. I agree with neither of these: the interpretability of the model is not an important benefit, and the model can be interpreted, with caution.
2. The authors completely ignore the capacity of the model to deal with missing data, even though this is a key aspect of the problem.
3. It is not clear whether the test set is actually resulting in fair evaluation. In the description, it is stated that it is used for training, and that the missing values are filled with the mean value _of the class_. If that is true, the test set is not unbiased and the corresponding results are not valid.
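The leakage concern in the point above can be made concrete with a toy sketch (hypothetical values, not the authors' data): filling a missing test value with the mean of its own class uses the test label, which the model must not see; a leak-free alternative fits the imputation statistic on the training features alone.

```python
# Illustrative sketch of the imputation concern (hypothetical data).
# Filling a missing TEST value with the per-class mean selects the fill
# using the test label, encoding the answer into the feature. A leak-free
# approach computes one statistic from training features only.

train_x = [1.0, 2.0, 3.0, 8.0, 9.0]    # training feature values
train_y = [0,   0,   0,   1,   1]      # training labels

# Leaky: per-class means, later applied to test rows using their labels.
class_mean = {
    c: sum(x for x, y in zip(train_x, train_y) if y == c)
       / sum(1 for y in train_y if y == c)
    for c in (0, 1)
}

# Leak-free: a single mean from training features, no labels involved.
train_mean = sum(train_x) / len(train_x)

test_label = 1                         # unknown at prediction time!
leaky_fill = class_mean[test_label]    # 8.5 -- strongly encodes the label
fair_fill = train_mean                 # 4.6 -- safe for an unbiased evaluation
```

If the described procedure did use class-conditional fills on the test set, the reported test metrics would be optimistically biased, which is exactly the validity problem raised above.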
4. The graphs are not completely clear and seem to be slightly incorrect: the y-axis is not fully labelled, and results that should be 1 (the maximum possible value in this case) don't seem to be.
See above: the validity of some results is in doubt, and some results are not reported unambiguously.
The conclusions and the potential use cases are well developed.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.