All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The reviewers and I agreed that the paper is ready for publication. Congratulations.
No comment
No comment
No comment
No comment
The article is acceptable for publication if the authors address the suggestions given by the reviewers (especially reviewer 2). Please take these suggestions into account when preparing the new version of the manuscript.
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
The article was clearly written and professionally presented.
References were used properly.
Research questions were well defined, relevant, and meaningful.
Mathematical calculations were presented clearly and correctly.
Conclusions were well stated and linked to the original research.
The manuscript was impressive, and the authors did a great job of explaining their work in a clear and engaging manner.
I'm impressed and loved the content.
No comment
No comment
No comment
The authors proposed a metric for classifier uncertainty. I think the idea is important and novel. I have some suggestions to improve the study:
- The literature review is weak. The authors should add a substantial number of related references to support their hypothesis and findings.
- The authors tested their method on only one use case, from a Kaggle competition. That is not enough to establish the generality of the model, so I suggest the authors provide more use cases to strengthen the work.
- Evaluation metrics (e.g. accuracy, the confusion matrix) have been used in previous biological works with small datasets, such as PMID: 33036150, PMID: 32942564, and PMID: 31987913. The authors are therefore encouraged to cite such works to attract a broader readership. (A brief sketch of these metrics appears after this list.)
- The authors have not clearly explained the classifiers they used.
- Did the authors perform an independent test of the results?
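For readers unfamiliar with the metrics named above, here is a minimal sketch of computing accuracy and a confusion matrix on a small dataset. The scikit-learn classifier and the synthetic data are illustrative placeholders only, not the manuscript's actual pipeline.

```python
# Illustration only: the metrics the reviewer names (accuracy, confusion
# matrix) computed on a small synthetic dataset. Classifier and data are
# placeholders, not taken from the manuscript under review.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("accuracy:", accuracy_score(y_te, y_pred))
print("confusion matrix:\n", confusion_matrix(y_te, y_pred))
```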
No comment. The manuscript is well written and concise.
Normally, the sample size (24) and the wide spread of sizes (from 8 to 350) might be a concern, but the bundled code allows for further verification.
The model defined for uncertainty quantification has been shown to arise from logical inconsistencies in the existing metrics (Caelen distributions). Furthermore, a full discussion of the prior considerations is also present. The large variation in the uncertainty of published classifier metrics is surprising; however, the analysis is valid and coherently presented. (A minimal sketch of the uncertainty-quantification idea follows.)
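The review alludes to distributions over classifier metrics in the spirit of Caelen's Bayesian treatment of the confusion matrix. The following is a minimal sketch of that general idea, assuming a flat Dirichlet prior over the confusion-matrix cell probabilities and propagating posterior samples to accuracy; it is not necessarily the exact model defined in the manuscript, and the counts shown are invented for illustration.

```python
# Sketch (assumption: this mirrors only the general idea of placing a
# posterior over confusion-matrix cell probabilities, in the spirit of
# Caelen's Bayesian treatment; it is not the manuscript's exact model).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confusion-matrix counts [[TP, FN], [FP, TN]] from a small test set.
counts = np.array([[8, 2],
                   [3, 7]])

# With a Dirichlet(1,1,1,1) prior over the four cell probabilities, the
# posterior is Dirichlet(counts + 1). Sample it and propagate to accuracy.
samples = rng.dirichlet(counts.ravel() + 1.0, size=10_000)
accuracy = samples[:, 0] + samples[:, 3]  # P(TP) + P(TN)

lo, hi = np.percentile(accuracy, [2.5, 97.5])
print(f"posterior mean accuracy: {accuracy.mean():.3f}")
print(f"95% credible interval:   [{lo:.3f}, {hi:.3f}]")
```

Even this toy example shows a wide credible interval for a test set of 20 items, which is consistent with the review's observation that published point estimates of classifier metrics can carry large uncertainty.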
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.