[Experimental] List of manuscripts available for review volunteers
1 manuscript available for review volunteers
November 9, 2017

Over the last decades, clinical decision support systems have been gaining importance. They help clinicians make effective use of the overwhelming amount of available information to reach correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black-box model that cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data.

Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It offers two problem formulations, based on linear programming or on the elastic net. Both can be used to construct a model for a (binary) classification problem and to establish risk profiles for future diagnosis. All of this requires only a few lines of code.
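The toolbox's own interface is not reproduced in this abstract, so the following is only a rough Python sketch of the underlying idea, assuming scikit-learn as a stand-in and a standard dataset as placeholder data: fit an elastic-net-penalized logistic regression and round the surviving coefficients to integer points, as manual scoring systems do.

```python
# Hypothetical sketch (not the ICS toolbox's actual API): derive a simple
# additive scoring system via elastic-net logistic regression.
import numpy as np
from sklearn.datasets import load_breast_cancer   # placeholder dataset
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)        # a binary classification task
X = StandardScaler().fit_transform(X)             # put features on one scale

# The elastic-net penalty drives many coefficients to zero, which keeps
# the resulting score sparse and hence interpretable.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.9, C=0.1, max_iter=5000).fit(X, y)

# Rescale the surviving weights to small integer "points"; zero-weight
# features drop out of the score entirely.
w = clf.coef_.ravel()
points = np.round(5 * w / np.abs(w).max()).astype(int)
score = X @ points                                # total score per patient
```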

ICS differs from standard machine learning in that its model consists of interpretable main effects and interactions. Furthermore, expert knowledge can be incorporated, since training can be run semi-automatically. This allows end users to trade off complexity against performance based on cross-validation results and expert knowledge, as sketched below.
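How such a trade-off might look in practice is again a hypothetical scikit-learn sketch, not the toolbox's semi-automatic procedure: sweep the penalty strength and report model size alongside cross-validated accuracy, so that a simpler model can be chosen deliberately.

```python
# Hypothetical complexity/performance trade-off: smaller C forces a
# sparser (simpler) model, usually at some cost in accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer   # placeholder dataset
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

for C in (0.01, 0.03, 0.1, 0.3, 1.0):
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.9, C=C, max_iter=5000)
    acc = cross_val_score(clf, X, y, cv=5).mean()    # 5-fold CV accuracy
    n_terms = np.count_nonzero(clf.fit(X, y).coef_)  # surviving features
    print(f"C={C:<5} terms={n_terms:2d}  CV accuracy={acc:.3f}")
```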

Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, while the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually to new observations, as well as for validation by experts in the specific application domains.
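The evaluation quantities named above are all standard; a minimal sketch using generic scikit-learn utilities (not the toolbox's own assessment functions) could look as follows.

```python
# Hedged sketch of the evaluation step: accuracy, ROC curve, and a
# calibration curve for the predicted risks, on held-out data.
from sklearn.calibration import calibration_curve
from sklearn.datasets import load_breast_cancer   # placeholder dataset
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
risk = clf.predict_proba(X_te)[:, 1]              # predicted risk per case

print("accuracy:", accuracy_score(y_te, risk > 0.5))
print("AUC:", roc_auc_score(y_te, risk))
fpr, tpr, _ = roc_curve(y_te, risk)               # points on the ROC curve

# Calibration: do the predicted risks match the observed event rates?
obs_rate, mean_risk = calibration_curve(y_te, risk, n_bins=10)
```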

The validity and applicability of the toolbox are demonstrated by comparing it to standard machine learning approaches such as Naive Bayes and Support Vector Machines on several real-life datasets. These case studies on medical problems show its applicability as a decision support system. ICS performs similarly in terms of classification and calibration. Its slightly lower performance is offset by the simplicity of its model, which makes it the method of choice when interpretability is a key issue.
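A comparison of this kind might be set up as below; this is only an illustration of the protocol with placeholder data, not the paper's actual experiments or results.

```python
# Hedged sketch: the same cross-validation protocol applied to the
# Naive Bayes and SVM baselines named in the abstract.
from sklearn.datasets import load_breast_cancer   # placeholder dataset
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

for name, model in [("Naive Bayes", GaussianNB()),
                    ("SVM (RBF)", make_pipeline(StandardScaler(), SVC()))]:
    acc = cross_val_score(model, X, y, cv=5).mean()   # same 5-fold protocol
    print(f"{name}: CV accuracy = {acc:.3f}")
```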

FAQs

Is this open peer review?

No, peer review is still single-blind, and all recommendations are private between the authors and the Academic Editor. However, any reviewer has the option to sign their report, and once a manuscript is accepted for publication, that review can be shown publicly; again, this is optional.

Will I be guaranteed to review if I volunteer?

No. Volunteering is not a guarantee that you will be asked to review, for several reasons. For one, reviewers must have relevant qualifications for a given manuscript and be free of any conflicts of interest. Additionally, enough reviewers may already have accepted an invitation to review, in which case we would not invite any more.

Why aren't there more manuscripts available?

Manuscripts are shown when authors have opted in to obtaining reviewers through the reviewer-match service. Additionally, enough reviewers may already have been found through other means, for example, invitations sent by the Academic Editor in charge.

What are the editorial criteria?

Please visit the editorial criteria page for initial guidance. You will also be given additional information if invited to review.