Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on September 2nd, 2020 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on November 22nd, 2020.
  • The first revision was submitted on December 22nd, 2020 and was reviewed by 1 reviewer and the Academic Editor.
  • The article was Accepted by the Academic Editor on January 27th, 2021.

Version 0.2 (accepted)

· Jan 27, 2021 · Academic Editor

Accept

The reviewers and I agreed the paper is ready for publication. Congratulations.

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

no comment

Version 0.1 (original submission)

· Nov 22, 2020 · Academic Editor

Major Revisions

The article will be acceptable for publication if the authors address the suggestions given by the reviewers (especially Reviewer 2). Please take them into account when preparing the new version of the manuscript.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter.  Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The article was clearly written and professionally presented.
References were used properly.

Experimental design

Research questions were well defined, relevant, and meaningful.

The mathematical calculations were presented clearly.

Validity of the findings

Conclusions were well stated and linked to the original research.

Additional comments

The manuscript was impressive, and the authors explained their work very clearly.
I enjoyed the content.

Reviewer 2 ·

Basic reporting

No comment

Experimental design

No comment

Validity of the findings

No comment

Additional comments

The authors proposed a metric for classifier uncertainty. I think the idea is important and novel. I have some suggestions to improve the study:
- The literature review is weak. The authors should add a substantial number of related references to support their hypothesis and findings.
- The authors tested their method on only one use case, from a Kaggle competition, which is not enough to demonstrate the generality of the model. I therefore suggest the authors provide more use cases to strengthen the work.
- Evaluation metrics (e.g., accuracy, confusion matrix) have been used in previous biological studies with small datasets, such as PMID: 33036150, PMID: 32942564, and PMID: 31987913. The authors are therefore encouraged to cite more such works to attract a broader readership.
- The authors have not adequately explained the classifiers they used.
- Did the authors perform any independent test of the results?

·

Basic reporting

No comment. The manuscript is well written and concise.

Experimental design

Normally, the sample size (24) and the wide spread of sizes (from 8 to 350) might be a concern, but the bundled code allows for further verification.

Validity of the findings

The model defined for uncertainty quantification has been shown to arise from logical inconsistencies in the existing metrics (Caelen distributions). Furthermore, a full discussion of the prior considerations is also present. The fact that there is large variation in the uncertainty of published classifier metrics is surprising; however, the analysis is valid and coherently presented.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.