Review History

All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.



  • The initial submission of this article was received on October 10th, 2018 and was peer-reviewed by 2 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 6th, 2018.
  • The first revision was submitted on January 11th, 2019 and was reviewed by the Academic Editor.
  • A further revision was submitted on January 30th, 2019 and was reviewed by the Academic Editor.
  • The article was Accepted by the Academic Editor on January 30th, 2019.

Version 0.3 (accepted)

· Jan 30, 2019 · Academic Editor


Thank you for your attention to the comments. Congratulations again!

# PeerJ Staff Note - this decision was reviewed and approved by Elena Papaleo, a PeerJ Section Editor covering this Section #

Version 0.2

· Jan 22, 2019 · Academic Editor

Minor Revisions

There is just one aspect of the manuscript that needs to be addressed before acceptance can be finalized. In your author list, you include the "Alzheimer’s Disease Neuroimaging Initiative" as an author. While data from this consortium were used, a footnote indicates that the group did not contribute to the analysis or writing. It would normally be best to acknowledge the consortium as the source of the data, along with any relevant citations, rather than include it in the author list (which would normally imply that the group also had the opportunity to review and/or contribute to other aspects of the manuscript).

To summarize, the suggestion is that the Initiative be removed from the author list and instead be listed in the acknowledgements, as well as cited wherever else appropriate. Thank you for addressing this, and please contact me or the editorial staff if there are any questions or concerns.

Version 0.1 (original submission)

· Dec 6, 2018 · Academic Editor

Minor Revisions

First, I am very sorry for the length of time it has taken to get your manuscript reviewed.

Please do address the reviewer concerns, with comparison to a generic decision tree and/or CORELS being very useful.


Reviewer 1 ·

Basic reporting

The authors have presented a machine learning model named SHIMR for automated diagnosis of Alzheimer's disease and tested the proposed system on the ADNI dataset. The detailed experimental results show that SHIMR is comparable in accuracy to the competing methods while being more cost-effective and reliable.
1. Text in Figure 2(B) is hard to read. Text size should be increased.
2. Line#73: "He" should be named with citation to avoid ambiguity. Similar ambiguities should be removed.
3. Text in Figure 4(C) and Figure 4(D) is hard to read. Text size should be increased.

Experimental design

Decision trees and traditional machine learning methods are a good choice for this topic. However, the authors should also review methods based on state-of-the-art deep learning. Hyperspectral imaging can also aid in improving automated diagnosis. The following articles should be cited to improve the literature review:

Validity of the findings

The experimental analysis is comprehensive and nicely concluded. Authors should also mention any possible risks involved in using machine learning for precision medicine and adopting SHIMR for automated diagnosis for Alzheimer's.

Reviewer 2 ·

Basic reporting

I see only small problems with the writing and presentation. The figures associated with the resulting rule systems are very well done.

Please convert the supplemental documents into PDFs to make them easier to access.

Figure 5 needs to be revised or the associated text changed. I attempted to manually determine the model score from the matched rules, but was only successful on the NC case (part A). I'm guessing that the only rule that matched in Part A was the first rule, thus making the score calculation work with the displayed results. For Part B, I calculated that the model score as displayed should be 0.68 (0.53+0.43-0.28), but the overall model score is 1.1. I'm assuming that some of the other excluded rules match for Part B, but this should be either acknowledged in the figure caption or the figure modified to show the full rule set and associated matches.
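The arithmetic the reviewer walks through reflects how an additive rule model produces its score: each rule carries a weight, and the subject's score is the sum of the weights of all rules that fire. A minimal sketch of that scoring step is below; the predicates, weights, and feature names are invented for illustration (only the weight values 0.53, 0.43, and -0.28 echo the reviewer's example) and are not taken from the paper's actual rule set.

```python
# Minimal sketch of additive rule-model scoring, as discussed above.
# Each rule is a (predicate, weight) pair; the model score is the sum
# of weights for every rule whose predicate matches the features.
# Rules, thresholds, and feature names here are hypothetical.

def model_score(features, rules):
    """Sum the weights of every rule that fires on `features`."""
    return sum(weight for predicate, weight in rules if predicate(features))

rules = [
    (lambda f: f["a"] > 1.0, 0.53),
    (lambda f: f["b"] < 0.5, 0.43),
    (lambda f: f["c"] > 2.0, -0.28),
]

subject = {"a": 1.2, "b": 0.3, "c": 2.5}  # all three rules fire
print(model_score(subject, rules))        # 0.53 + 0.43 - 0.28 = 0.68
```

If additional rules (excluded from the figure) also match, the displayed partial sum will not equal the overall score, which is the discrepancy the reviewer observed between 0.68 and 1.1.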

Experimental design

The design of novel algorithms/systems for explainable decision making is interesting. The associated source code appears to be complete with appropriate instructions for usage.

Validity of the findings

No major comments, all of the data and results appear to be statistically sound.

Can the authors add a statement concerning the standard runtime of the algorithm? What was the runtime to produce the rule set in Figure 4C?

Additional comments

This system is sufficiently different from a vanilla decision tree to merit publication, but I don't completely agree with the idea that it is somehow more interpretable than a vanilla decision tree. Comparing the rule set in Fig 4C and the decision tree in 4D, the decision tree seems simpler to use, with a more straightforward interpretation.

Another system that produces similarly simple rule sets is the CORELS system that is cited in the paper. How many rules does it take for the CORELS output to reach a similar AUC to the SHIMR output?

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.