Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on August 12th, 2020 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on September 7th, 2020.
  • The first revision was submitted on September 29th, 2020 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on October 1st, 2020.

Version 0.2 (accepted)

· Oct 1, 2020 · Academic Editor

Accept

Both reviewers consider the new version of the paper acceptable. Congratulations!

Reviewer 1 ·

Basic reporting

No comment

Experimental design

No comment

Validity of the findings

No comment

Additional comments

All of the issues raised have been adequately addressed.

Reviewer 2 ·

Basic reporting

The authors have addressed all of the requested changes. I recommend acceptance of the paper.

Experimental design

OK

Validity of the findings

OK

Version 0.1 (original submission)

· Sep 7, 2020 · Academic Editor

Minor Revisions

The paper is almost ready to be accepted. Please prepare a new version addressing the issues pointed out by the reviewers.

Reviewer 1 ·

Basic reporting

No comment

Experimental design

No comment

Validity of the findings

No comment

Additional comments

The paper is scientifically sound and well written, with an exhaustive set of experiments on a suite of different synthetic and real datasets. I have only two points I would like to see better clarified:
- there is a non-negligible overlap with Slavkov (2018); although the authors acknowledge this, I think they should stress more strongly the novelty of the present contribution w.r.t. their preliminary work;
- the number of alternatives shown (e.g., the different predictive models for building the FFA/RFA curves) provides a broad and solid landscape, but it does not help the researcher who wants to apply the proposed method (e.g., what should I do when different models provide very different rankings? Which model should I trust?). I think the paper would greatly benefit from a working recipe for a practitioner wanting to use the method effectively in practice (see the sketch just below this list).
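
To make the second point concrete, the sketch below shows how the same ranking can produce diverging FFA curves under two learners. It is a minimal illustration under my own assumptions (toy data, a correlation-based ranking, default scikit-learn models), not the authors' implementation:

    # Minimal sketch (not the authors' code): FFA curves for one toy ranking
    # under two different learners; the resulting curves can diverge.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=15, random_state=0)
    # Toy ranking: features ordered by absolute correlation with the target.
    ranking = np.argsort(-np.abs(np.corrcoef(X.T, y)[-1, :-1]))

    for name, model in [("kNN", KNeighborsClassifier()), ("SVM", SVC())]:
        # FFA: grow the feature set best-first, scoring each prefix with 5-fold CV.
        curve = [cross_val_score(model, X[:, ranking[:k]], y, cv=5).mean()
                 for k in range(1, len(ranking) + 1)]
        print(name, np.round(curve, 3))

When the two printed curves disagree on which prefix performs best, the practitioner currently has no guidance on which to trust; a recipe for this case would be valuable.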

Finally, the main text is quite dense and rich in figures. I would rather move more material to the Appendix/Supplementary Material to better highlight the key messages in the main text, so that they do not sink in a sea of results that may prevent a non-specialist reader from fully grasping the overall message.

Reviewer 2 ·

Basic reporting

The authors address an interesting problem within the context of feature ranking. They present a method for evaluating feature ranking algorithms, based on assessing the correctness of a feature ranking through the accuracy achieved with it using different learning algorithms. Built on two different chains and a simple idea, it seems to provide meaningful information about the rankings.
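
As I understand it, the two chains amount to scoring growing feature prefixes, taken best-first and worst-first respectively. A minimal sketch of this reading, with hypothetical names (ranking, X, y) and an arbitrary learner standing in for the ones used in the paper:

    # Sketch of the two chains as I read them (hypothetical stand-in code):
    # a ranking is scored by the accuracy of models trained on growing
    # feature subsets, added best-first (FFA) or worst-first (RFA).
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    def chain_curve(X, y, order):
        """Mean 5-fold CV accuracy after adding features one by one in `order`."""
        model = DecisionTreeClassifier(random_state=0)
        return [cross_val_score(model, X[:, order[:k]], y, cv=5).mean()
                for k in range(1, len(order) + 1)]

    # ffa = chain_curve(X, y, ranking)        # best-ranked features first
    # rfa = chain_curve(X, y, ranking[::-1])  # worst-ranked features first

Under this reading, a good ranking would show an FFA curve that rises quickly and an RFA curve that rises slowly.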

The state-of-the-art coverage is very complete, although I would suggest including more recent references on approaches for measuring feature stability.

The contribution is stated clearly.

Experimental design

Extensive experimental work is carried out, including synthetic and real-world datasets.

However, the choice of the best learning methods for building the curves (SVMs and k-NN) is not well justified. I encourage the authors to clarify this point.

Could this depend on the type of dataset: the number of instances, dimensionality, noise?

Validity of the findings

Specify the limitations and drawbacks of the proposed method.

Specify whether this method could be applied to regression problems.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.