All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The GitHub repo looks great. Thanks and congratulations!
Thanks for updating those figures. I am sorry, but I realized there is one more issue that needs to be addressed. The code is available on GitHub, but there is no README page or instructions on how to run the code. I would like to test your code to make sure it runs properly. Please update the GitHub site so that it is clear what needs to be done: What software (and versions) needs to be installed? How is the code executed? Etc.
Thank you for addressing the reviewers' comments. I request that you make a few additional changes. Please modify Figures 2-4 so that they use a white background (with gray grid lines) rather than a gray background; it is a bit difficult to see the confidence bounds on the lines in the graphs against the gray background. Also, please update the color schemes on these figures to use a colorblind-friendly color palette. 8% of men and 0.5% of women are red/green colorblind and thus may not be able to differentiate those lines very well. One option for finding colorblind-friendly palettes is to use colorbrewer2.org. Sorry that I did not notice this earlier.
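For authors working in Python, the requested changes take only a few lines; here is a minimal matplotlib sketch (the series data and output filename are placeholders, and the hex colors are one commonly used colorblind-safe set, the Okabe-Ito palette, rather than a specific ColorBrewer scheme):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Colorblind-safe qualitative colors (Okabe-Ito palette)
palette = ["#0072B2", "#E69F00", "#009E73", "#CC79A7"]

fig, ax = plt.subplots()
ax.set_facecolor("white")        # white plot background instead of gray
ax.grid(True, color="0.85")      # light-gray grid lines

# Placeholder data standing in for the confidence-bounded lines in Figs. 2-4
for i, color in enumerate(palette):
    ax.plot([0, 1], [i, i + 1], color=color, label=f"series {i + 1}")

ax.legend()
fig.savefig("figure2_colorblind.png")
```

A similar effect can be achieved globally via matplotlib style sheets or ggplot2's `theme_bw()` if the figures were made in R.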
I apologize for not getting this back to you sooner; I was away from internet access for 10 days right after the second review was submitted.
The reviewers have provided overall favorable comments, but they have both noted areas for improvement. Most of these are related to the actual manuscript as opposed to the research methodology. Therefore, I have made a "Minor Revisions" decision. Please make sure to address each of their comments carefully. Make sure also that the writing and grammar are accurate and consistent throughout the manuscript.
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at firstname.lastname@example.org for pricing (be sure to provide your manuscript number and title) #]
The authors presented the background very clearly, with up-to-date statistics. They reviewed widely and provided a good summary of previous work on CNNs for malaria detection.
Figure 4 presents too much information, most of which is not referenced in the main text. Much of this material may be presented as supplementary figures instead.
While the research area is not new, the authors defined a relevant gap and presented a sound machine learning pipeline.
The experiments done were very rigorous and commendable.
While the topic and application of resolution upscaling is interesting and good results were demonstrated, the motivation behind it is not clear in the text. The flow of the narrative would greatly improve if the reasons were explicitly stated; otherwise, the section seems somewhat disconnected from the rest.
The section on the integration with the mobile platform seems insufficient. One of the main 'selling points' is being able to run this system in low-resource countries, but very little weight was given to this topic. Perhaps additional implementation details and a review of other computer-vision apps on Android would help strengthen the section.
Performance metrics were defined well and although previous works performed better, the authors provided sound reasoning as to why it was the case and why their model is still worth looking into.
Limitations were also stated.
The conclusion invokes code modularity, but this is not well argued in the paper. While the proposed platform has clearly identifiable components, the replaceability of each is not well established.
1.1 Professional standards of clarity, non-ambiguity, technically correct English
The introduction is very well-crafted English. However, in the methods, results, and discussion sections, there are occasional words missing. In addition, in these sections, there are several instances of subject-verb number disagreement (annotated in PDF).
1.2 Literature references including sufficient field background and context
The methods section is severely under-referenced and does not indicate the sources, either academic or commercial, of most of the algorithms and platforms used. Computing terminology is not consistently explained at a level appropriate for a medical/biological journal.
1.3 Professional article structure, figures, tables
Text is in the correct article structure. Figures and tables are interpolated in the PDF provided to reviewers. Correct procedure is to submit figures and tables as separate files; however, the manuscript PDF provided may have been integrated by the PeerJ content management system. No other issues.
1.4 Self-contained with relevant results to hypotheses.
It should be made clearer in the abstract and introduction that the primary goal is proof-of-concept of a smartphone-executed detection algorithm without need for high-speed internet connection(s). Otherwise no issues.
The research is relevant to medical practice and is within scope.
2.2 Research question well defined, relevant and meaningful
The research goal is well defined and shown to be relevant and meaningful. A sharper emphasis on the exact knowledge gap (a smartphone-only based app suitable for field use in Africa) and best applications of the knowledge obtained would be helpful (see comments in PDF).
2.3 Rigorous investigation, high technical and ethical standards
2.4 Methods described with sufficient detail & information to replicate
Some ambiguity in which algorithms, and which versions of which algorithms, were used. Insufficient references given in Methods section to fully source all algorithms. In a few cases, ambiguity existed on which dataset was used in which step.
3.1 Novelty and replication
Novelty is assessed but only in terms of the goal parameters (development of smartphone-only algorithm). Meaningful replication is encouraged for further development of robustness.
3.2 Data integrity
3.3 Sound conclusions
Conclusions do not extend beyond the goal of proof-of-concept, appropriate for the work performed.
3.4 Speculation identified
This is a work advancing the field of automated malarial smear reading, with the notable step of eliminating the need for cloud-based processing. The authors show acceptable performance within the limits of their training sets; however, they acknowledge that better datasets with wider variety and less prone to overtraining are needed to produce a usable tool for field medical work. The manuscript as currently submitted has nonstandard reference formatting and insufficient references, particularly in the Methods section. It also has insufficient explanation of computational technical terms to fit the journal's intended scope. However, the methods and results are solid overall, and the topic is appropriate and sufficiently interesting to merit consideration.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.