All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.
Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.
The authors have addressed reviewer comments.
[# PeerJ Staff Note - this decision was reviewed and approved by Xiangjie Kong, a PeerJ Section Editor covering this Section #]
Paper is OK for Publication in its current form
The study titled "A Novel Transfer Learning Approach for Glioblastoma Survival and Grade Prediction" presents an intriguing application of transfer learning techniques to address the challenges posed by Glioblastoma, a malignant brain tumor. The paper's abstract provides a clear overview of the research objectives and outcomes, which focus on survival and grade prediction. The significance of accurately predicting these factors in Glioblastoma patients cannot be overstated due to the aggressive nature of the disease and its impact on patient outcomes.

The utilization of various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, is commendable, as it reflects a comprehensive approach to model selection and optimization. The authors' decision to fine-tune these networks on a Glioblastoma image dataset aligns with best practices in the field, leveraging the knowledge captured by pre-trained models to enhance performance on a specific task. The reported results of 65% accuracy in survival prediction and 97% accuracy in tumor grade classification underscore the effectiveness of the proposed methodology.

The authors rightly attribute the success of their approach to the power of transfer learning. By leveraging the learned features from general tasks, the models achieve remarkable predictive accuracy, surpassing existing state-of-the-art methods. The practical implication of this finding extends beyond Glioblastoma research, as transfer learning holds potential in various medical image analysis scenarios with limited datasets.

However, it is essential for the paper to provide a detailed account of the experimental setup, including the specifics of the dataset used, the criteria for patient classification into survival categories, and the methodology for tumor grade annotation. Additionally, insights into the computational resources required for fine-tuning and testing these models would enhance the reproducibility of the study.

In conclusion, the paper makes a valuable contribution to the field of medical image analysis by introducing a novel transfer learning approach for Glioblastoma survival and grade prediction. The experimental outcomes are promising and have the potential to significantly impact diagnostic and treatment strategies for Glioblastoma patients.
The authors need to revise the rebuttal as well as the paper, clearly stating their contribution, and then revise the experimental design accordingly. The authors used transfer learning, which relies on pre-trained models, so that in itself is not a contribution of their own; their experiments are based on someone else's dataset, so again, what is their contribution? Moreover, if the authors do not have their own data, they can at least show some validation results for zero-shot learning on unseen data.
Revise the experiments and include a validation portion (besides training and testing), with zero-shot learning on unseen data.
Discuss the major difference between your study and those conducted before. If you just use a new model that is pre-trained, then I think this contribution is not sufficient for publication in a leading journal. Zero-shot learning is important to verify that your accuracy is not due to overfitting. Your end product will, of course, be used on unseen data. Therefore, validate your model on unseen data, such as MRI images collected from local hospitals.
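The unseen-data check the reviewer asks for can be sketched in miniature: fit the model on one source, then report accuracy on a second source that played no role in training or model selection. Everything below (the threshold "model" and both toy cohorts) is an illustrative placeholder, not the authors' actual pipeline or data.

```python
# Hypothetical sketch of external validation on unseen data. The "model"
# is a trivial decision threshold; the two cohorts stand in for, e.g.,
# the competition dataset (source A) and a local-hospital MRI cohort
# (source B) that the model never sees before the final evaluation.

def fit_threshold(train):
    # Toy model: the midpoint between the two class means.
    lows = [x for x, y in train if y == 0]
    highs = [x for x, y in train if y == 1]
    return (sum(lows) / len(lows) + sum(highs) / len(highs)) / 2

def accuracy(threshold, data):
    correct = sum(1 for x, y in data if (x > threshold) == (y == 1))
    return correct / len(data)

# Source A: used for fitting. Source B: the "unseen" external cohort.
source_a = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
source_b = [(0.15, 0), (0.25, 0), (0.75, 1), (0.85, 1)]

t = fit_threshold(source_a)
print(accuracy(t, source_b))  # external accuracy, computed once, at the end
```

The point of the structure, rather than of the toy numbers, is that `source_b` is touched exactly once: any repeated tuning against it would turn "unseen" data back into validation data and mask overfitting.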
Please carefully respond to the reviewers' comments.
**PeerJ Staff Note:** Please ensure that all review, editorial, and staff comments are addressed in a response letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.
The study titled "A Novel Transfer Learning Approach for Glioblastoma Survival and Grade Prediction" presents an intriguing application of transfer learning techniques to address the challenges posed by Glioblastoma, a malignant brain tumor. The paper's abstract provides a clear overview of the research objectives and outcomes, which focus on survival and grade prediction. The significance of accurately predicting these factors in Glioblastoma patients cannot be overstated due to the aggressive nature of the disease and its impact on patient outcomes.
Concerns
My primary concern about this paper is its suitability for the PeerJ Computer Science journal. Although this is an important topic and relevant research, the work seems to fit more into the healthcare/medical area than into the Computer Science field. One may argue that there is no fine line between the two, and this is probably true. However, looking at the details of the paper's background, references, introduction, and conclusion, it is evident to me that the paper serves the healthcare and medical field (which is really broad, by the way) more than it does Computer Science. Therefore, if the authors have arguments against this concern, I am open to considering their thoughts.
The utilization of various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, is commendable, as it reflects a comprehensive approach to model selection and optimization. The authors' decision to fine-tune these networks on a Glioblastoma image dataset aligns with best practices in the field, leveraging the knowledge captured by pre-trained models to enhance performance on a specific task. The reported results of 65% accuracy in survival prediction and 97% accuracy in tumor grade classification underscore the effectiveness of the proposed methodology.
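The fine-tuning recipe praised above can be reduced to its core idea: a pre-trained backbone kept frozen as a feature extractor, with only a small task-specific head trained on the target data. The sketch below illustrates that idea with a logistic-regression head on toy one-dimensional "images"; the extractor, data, and hyperparameters are illustrative stand-ins, not the authors' EfficientNet/ResNet/VGG16/Inception setup.

```python
import math

def frozen_extractor(x):
    # Stand-in for a pre-trained backbone (e.g. EfficientNet): it maps the
    # raw input to a feature vector and is never updated during fine-tuning.
    return [x, x * x]

def train_head(xs, ys, epochs=200, lr=0.5):
    # Only the head's weights are learned; the extractor stays frozen.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f = frozen_extractor(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - y                     # gradient of the log-loss
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    f = frozen_extractor(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Toy "images" summarised as one intensity value each, two classes.
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_head(xs, ys)
print([predict(w, b, x) for x in xs])
```

The design choice the paragraph describes is visible in the split: `frozen_extractor` carries knowledge "learned" elsewhere and receives no gradient updates, while `train_head` adapts only the small number of task-specific parameters, which is what makes fine-tuning viable on limited medical datasets.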
Concern
To me, an important point in a pure machine learning/deep learning paper is its contribution. I see the function of a study as giving the reader a conceptual structuring of the current and previous knowledge on the topic, and then giving even more. A good study highlights as its result what is already known and what knowledge gaps exist, also providing some possible steps for future research. But a good study also needs to have a contribution, not only a statement of facts. This contribution can be a model, theory, framework... something that gives the reader a deeper understanding of the phenomenon than what they can get by just reading lists of facts. In an ML study, as in any paper, it is important to consider 1) contribution ("what's new?"), 2) impact ("so what?"), 3) logic ("why so?"), and 4) thoroughness ("well done?") (see e.g. Webster & Watson). Now, in your paper, you have lots of facts, but I am missing answers to the questions "what's new?", "so what?", and "why so?". I do not think that your contribution is very clear. How much in this article is new compared to the already published literature? Furthermore, I recommend that the authors use the concept of zero-shot learning in the validation phase, i.e., validate the trained model on unseen data.
Secondly, the methodology is not well presented. To improve and strengthen the methodology section, I recommend citing a few of the latest articles related to brain tumors. I recommend further refinement of the experimental details and contextualization of the results in the final manuscript to ensure its comprehensiveness and impact.
The authors rightly attribute the success of their approach to the power of transfer learning. By leveraging the learned features from general tasks, the models achieve remarkable predictive accuracy, surpassing existing state-of-the-art methods. The practical implication of this finding extends beyond Glioblastoma research, as transfer learning holds potential in various medical image analysis scenarios with limited datasets.
However, it is essential for the paper to provide a detailed account of the experimental setup, including the specifics of the dataset used, the criteria for patient classification into survival categories, and the methodology for tumor grade annotation. Additionally, insights into the computational resources required for fine-tuning and testing these models would enhance the reproducibility of the study.
Concerns
The third major issue is that the analysis and results are not sufficient for publication in such a leading journal. I recommend including a few more analyses of different data splits, such as 80-20%, 70-30%, 60-40%, and 50-50%, and choosing the best one.
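The split-ratio comparison the reviewer requests can be sketched as a loop over train fractions, evaluating the same model under each split. The dataset and the 1-nearest-neighbour scorer below are placeholders standing in for the study's actual model and LGG/HGG data.

```python
import random

random.seed(0)

def nn_accuracy(train, test):
    # 1-NN: each test point takes the label of its closest training point.
    correct = 0
    for x, y in test:
        nearest = min(train, key=lambda t: abs(t[0] - x))
        correct += int(nearest[1] == y)
    return correct / len(test)

# Two well-separated toy classes standing in for the two tumor grades.
data = [(random.uniform(0.0, 0.4), 0) for _ in range(50)] + \
       [(random.uniform(0.6, 1.0), 1) for _ in range(50)]

# Evaluate the same pipeline under 80-20, 70-30, 60-40 and 50-50 splits.
results = {}
for train_fraction in (0.8, 0.7, 0.6, 0.5):
    shuffled = data[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    results[train_fraction] = nn_accuracy(shuffled[:cut], shuffled[cut:])

for frac, acc in results.items():
    print(f"{int(frac * 100)}-{int(round((1 - frac) * 100))} split: accuracy {acc:.2f}")
```

In practice one would repeat each split several times (or stratify by class) before "choosing the best one", since a single shuffle per ratio conflates the effect of the split size with sampling noise.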
Overall Rating: Strongly Recommended with Revisions
no comment
The study utilized transfer learning on pre-trained networks to predict survival and tumor grade in Glioblastoma patients. The approach outperformed existing methods, highlighting the potential of transfer learning in advancing Glioblastoma diagnostics and treatment. It is overall a decent paper and I have the following 4 comments:
1. Could you provide further details on the rationale behind subsampling the data? The ratio of HGG to LGG is approximately 4:1 (293:76), which doesn't appear to be significantly imbalanced.
2. Lines 159 to 174 detail the imputation method for age and survival time for LGG patients. Given that the original data originate from a competition, how did the competition assess the results, especially when a key outcome like survival time is absent? Could you explain this in more detail?
3. Building on the first question, to what extent do the results hinge on the imputed survival time and age? A comparative table (table-one) showcasing statistics between LGG and HGG patients might be insightful.
4. As a suggestion, considering there are 12 figures in the paper, it might be beneficial to either relocate some to supplementary materials or consolidate them to conserve space.
All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.