All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
All issues indicated by the reviewers were addressed and the manuscript was amended accordingly. Therefore, the revised version is acceptable now.
[# PeerJ Staff Note - this decision was reviewed and approved by Sonia Oliveira, a PeerJ Section Editor covering this Section #]
No comments
No comment
No comment
Please address the concerns of both reviewers and revise the manuscript accordingly.
1. The manuscript contains many author identifiers and should not have reached the reviewer without another check.
2. The English language in the manuscript needs major editing. The language in the Introduction and Discussion sections is ambiguous and too generalized for the subject.
3. The abstract needs to be rewritten completely; it lacks a proper background on the subject. The abstract should highlight the problem and the need for the solution offered by the authors. The materials and methods in the abstract should be shortened, and the focus should be on the relevant results and conclusions.
4. Raw/supplementary data should be provided for the methodologies used to evaluate the different parameters. For example, the CD markers were probably evaluated by flow cytometry; however, this is not mentioned anywhere in the manuscript.
5. The Introduction is too generalized and poorly written. It should highlight the problem addressed by the authors and explain how AI is going to solve it.
6. The Discussion section, as written, only restates the results. Rather than general comments on utility, it should engage with the relevant literature and stress the relevance of the results.
1. The authors have worked extensively with AI in this manuscript but have evaluated only common laboratory parameters. Radiological investigations such as X-ray, whole-body MRI, and PET-CT are an integral part of plasma cell disorder diagnosis. The authors should explain how the exclusion of these parameters could introduce bias into the algorithm.
2. Simple laboratory tests such as serum protein electrophoresis, IFE, and flow cytometry are widely available. If the AI-based algorithm fails to identify a malignancy, do the authors recommend further testing? It would be helpful if the authors provided, at the end of the Discussion, an algorithmic approach to diagnosing plasma cell dyscrasias that incorporates AI.
1. The authors have used extensive retrospective data, but the external validation cohort is small. How will the authors address this? Is there a plan for further validation including more parameters?
2. The false positive and false negative results of the AI algorithm should be reported in the Results and Discussion to substantiate the claimed superiority of the methodology.
No comment
No comment
No comment
It would be better to state the diagnostic criteria used by clinicians as the basis for distinguishing MM from non-MM cases, so that the practical value of the AI used by the researchers can be seen. The non-invasive advantage of the 9 parameters used should be emphasized more strongly.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.