PeerJ Computer Science: Computational Biology
https://peerj.com/articles/index.atom?journal=cs&subject=900
Computational Biology articles published in PeerJ Computer Science

SUTrans-NET: a hybrid transformer approach to skin lesion segmentation
https://peerj.com/articles/cs-1935
Published: 2024-03-13
Authors: Yaqin Li, Tonghe Tian, Jing Hu, Cao Yuan
Melanoma is a malignant skin tumor that threatens human life and health. Early detection is essential for effective treatment. However, the low contrast between melanoma lesions and normal skin and the irregularity in size and shape make skin lesions difficult to detect with the naked eye in the early stages, making the task of skin lesion segmentation challenging. Traditional U-shaped encoder-decoder networks built with convolutional neural networks (CNNs) have limitations in establishing long-term dependencies and global contextual connections, while the Transformer architecture is limited in its application to small medical datasets. To address these issues, we propose a new skin lesion segmentation network, SUTrans-NET, which combines CNN and Transformer in a parallel fashion to form a dual encoder, where both CNN and Transformer branches perform dynamic interactive fusion of image information in each layer. At the same time, we introduce our designed multi-grouping module SpatialGroupAttention (SGA) to complement the spatial and texture information of the Transformer branch, and utilize the Focus idea of YOLOv5 to construct the Patch Embedding module in the Transformer to prevent the loss of pixel accuracy. In addition, we design a decoder with full-scale information fusion capability to fully fuse shallow and deep features at different stages of the encoder. The effectiveness of our method is demonstrated on the ISIC 2016, ISIC 2017, ISIC 2018 and PH2 datasets, and its advantages over existing methods are verified.
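For readers unfamiliar with the Focus operation borrowed from YOLOv5, the sketch below illustrates a Focus-style patch embedding in PyTorch: every second pixel is sliced into four sub-images that are concatenated along the channel axis before projection to token embeddings. This is a minimal illustration, not the SUTrans-NET implementation; the embedding width and the 1x1 projection are assumptions.

```python
import torch
import torch.nn as nn

class FocusPatchEmbed(nn.Module):
    """Illustrative Focus-style patch embedding (space-to-depth + projection).

    Sketch only: slices every second pixel into four sub-images, concatenates
    them along the channel axis, then projects to an assumed embedding width.
    """
    def __init__(self, in_channels=3, embed_dim=96):
        super().__init__()
        # After slicing, channels grow 4x; project to the Transformer width.
        self.proj = nn.Conv2d(in_channels * 4, embed_dim, kernel_size=1)

    def forward(self, x):
        # x: (B, C, H, W) with even H and W
        top_left = x[..., ::2, ::2]
        top_right = x[..., ::2, 1::2]
        bottom_left = x[..., 1::2, ::2]
        bottom_right = x[..., 1::2, 1::2]
        x = torch.cat([top_left, top_right, bottom_left, bottom_right], dim=1)
        x = self.proj(x)                     # (B, embed_dim, H/2, W/2)
        return x.flatten(2).transpose(1, 2)  # (B, H/2 * W/2, embed_dim) tokens

if __name__ == "__main__":
    tokens = FocusPatchEmbed()(torch.randn(1, 3, 224, 224))
    print(tokens.shape)  # torch.Size([1, 12544, 96])
```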

Heart failure survival prediction using novel transfer learning based probabilistic features
https://peerj.com/articles/cs-1894
Published: 2024-03-12
Authors: Azam Mehmood Qadri, Muhammad Shadab Alam Hashmi, Ali Raza, Syed Ali Jafar Zaidi, Atiq ur Rehman
Heart failure is a complex cardiovascular condition characterized by the heart’s inability to pump blood effectively, leading to a cascade of physiological changes. Predicting survival in heart failure patients is crucial for optimizing patient care and resource allocation. This research aims to develop a robust survival prediction model for heart failure patients using advanced machine learning techniques. We analyzed data from 299 hospitalized heart failure patients, addressing the issue of imbalanced data with the Synthetic Minority Oversampling Technique (SMOTE). Additionally, we proposed a novel transfer learning-based feature engineering approach that generates a new probabilistic feature set from patient data using ensemble trees. Nine fine-tuned machine learning models are built and compared to evaluate performance in patient survival prediction. Our novel transfer learning mechanism applied to the random forest model outperformed other models and state-of-the-art studies, achieving a remarkable accuracy of 0.975. All models underwent evaluation using 10-fold cross-validation and tuning through hyperparameter optimization. The findings of this study have the potential to advance the field of cardiovascular medicine by providing more accurate and personalized prognostic assessments for individuals with heart failure.
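The abstract does not spell out how the probabilistic feature set is built, so the following scikit-learn/imbalanced-learn sketch shows one plausible reading: balance the classes with SMOTE, append out-of-fold class probabilities from a tree ensemble as new features, and evaluate a random forest with 10-fold cross-validation. The feature-generation step, the data, and all parameters are assumptions, not the authors' procedure.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict, cross_val_score

def add_probabilistic_features(X, y):
    """Append out-of-fold class probabilities from a tree ensemble as new features."""
    trees = ExtraTreesClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(trees, X, y, cv=10, method="predict_proba")
    return np.hstack([X, proba])

# Placeholder data standing in for the 299-patient clinical records.
X = np.random.rand(299, 12)
y = np.random.randint(0, 2, 299)

# The steps mirror the order described in the abstract (sketch only).
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # balance classes
X_aug = add_probabilistic_features(X_res, y_res)          # engineered probability features

rf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(rf, X_aug, y_res, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```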

An efficient combined intelligent system for segmentation and classification of lung cancer computed tomography images
https://peerj.com/articles/cs-1802
Published: 2024-02-27
Authors: Maheswari Sivakumar, Sundar Chinnasamy, Thanabal MS
Background and Objective
Lung cancer is one of the illnesses with the highest mortality and morbidity rates worldwide. Automatic lung tumor segmentation from CT images is essential. However, segmentation has several difficulties, such as different sizes, variable shapes, and complex surrounding tissues. Therefore, a novel enhanced combined intelligent system is presented to predict lung cancer in this research.
Methods
Non-small cell lung cancer must be recognized to detect lung cancer. In the pre-processing stage of the proposed model, noise in the CT images is eliminated using an average filter and an adaptive median filter, and histogram equalization is then applied to the filtered images to improve lung image quality. The adapted deep belief network (ADBN) is used to segment the affected region, with the help of its network layers, from the noise-removed lung CT image. Two cascaded restricted Boltzmann machines (RBMs), a Bernoulli–Bernoulli (BB) RBM and a Gaussian–Bernoulli (GB) RBM, are used for the segmentation process in the ADBN structure, and then relevant significant features are extracted. The hybrid spiral optimization intelligent-generalized rough set (SOI-GRS) approach is used to select compelling features of the CT image. Then, an optimized light gradient boosting machine (LightGBM) model using the Ensemble Harris hawk optimization (EHHO) algorithm is used for lung cancer classification.
Results
LUNA 16, the Kaggle Data Science Bowl (KDSB), the Cancer Imaging Archive (CIA), and local datasets are used to train and test the proposed approach. Python and several well-known modules, including TensorFlow and Scikit-Learn, are used for the extensive experiment analysis. According to the results, the proposed approach accurately identifies people with lung cancer. The method produced the least possible classification error while maintaining 99.87% accuracy.
Conclusion
The integrated intelligent system (ADBN-Optimized LightGBM) gives the best results among all input prediction models, taking performance criteria into account and boosting the system’s effectiveness, hence enabling better lung cancer patient diagnosis by physicians and radiologists.
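A rough sketch of the preprocessing and classification stages named in the Methods is given below, using OpenCV filters and a LightGBM classifier with generic settings. The ADBN segmentation, SOI-GRS feature selection, and EHHO hyperparameter tuning are not reproduced; the feature matrix, image, and parameters are placeholders.

```python
import cv2
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

def preprocess_ct_slice(img):
    """Denoise an 8-bit grayscale slice with average and median filters, then equalize."""
    img = cv2.blur(img, (3, 3))      # average filter
    img = cv2.medianBlur(img, 3)     # fixed-size stand-in for the adaptive median filter
    return cv2.equalizeHist(img)     # contrast enhancement

# Synthetic 8-bit slice standing in for a CT image.
slice_img = (np.random.rand(128, 128) * 255).astype("uint8")
enhanced = preprocess_ct_slice(slice_img)

# Placeholder features: in the paper these come from ADBN segmentation plus SOI-GRS selection.
X = np.random.rand(500, 64)
y = np.random.randint(0, 2, 500)

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)  # EHHO tuning not reproduced
print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```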

An efficient consolidation of word embedding and deep learning techniques for classifying anticancer peptides: FastText+BiLSTM
https://peerj.com/articles/cs-1831
Published: 2024-02-20
Authors: Onur Karakaya, Zeynep Hilal Kilimci
Anticancer peptides (ACPs) are a group of peptides that exhibit antineoplastic properties. The utilization of ACPs in cancer prevention can present a viable substitute for conventional cancer therapeutics, as they possess a higher degree of selectivity and safety. Recent scientific advancements have generated interest in peptide-based therapies, which offer the advantage of efficiently treating intended cells without negatively impacting normal cells. However, as the number of peptide sequences continues to increase rapidly, developing a reliable and precise prediction model becomes a challenging task. In this work, our motivation is to advance an efficient model for categorizing anticancer peptides employing the consolidation of word embedding and deep learning models. First, Word2Vec, GloVe, FastText, and One-Hot-Encoding approaches are evaluated as embedding techniques for representing peptide sequences. Then, the outputs of the embedding models are fed into the deep learning approaches CNN, LSTM, and BiLSTM. To demonstrate the contribution of the proposed framework, extensive experiments are carried out on widely used datasets in the literature, ACPs250 and Independent. Experiment results show that the use of the proposed model enhances classification accuracy when compared to state-of-the-art studies. The proposed combination, FastText+BiLSTM, exhibits 92.50% accuracy for the ACPs250 dataset and 96.15% accuracy for the Independent dataset, thereby establishing a new state of the art.
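As an illustration of the winning FastText+BiLSTM combination, the sketch below embeds amino-acid tokens with gensim's FastText and feeds the resulting vector sequences to a Keras BiLSTM. The vector size, sequence length, toy peptides, and network width are assumptions rather than the authors' settings.

```python
import numpy as np
from gensim.models import FastText
from tensorflow.keras import layers, models

# Toy peptide sequences and labels (placeholders for the ACPs250 / Independent sets).
peptides = ["GLFDIVKKVVGALG", "FLPLLAGLAANFLPKIFCKITRK"]
labels = np.array([1, 0])

tokens = [list(seq) for seq in peptides]                 # per-residue character tokens
ft = FastText(sentences=tokens, vector_size=50, window=5, min_count=1, epochs=20)

max_len = 50                                             # assumed fixed sequence length
def embed(seq):
    vecs = np.zeros((max_len, 50), dtype="float32")
    for i, aa in enumerate(seq[:max_len]):
        vecs[i] = ft.wv[aa]                              # FastText vector per residue
    return vecs

X = np.stack([embed(p) for p in peptides])

model = models.Sequential([
    layers.Input(shape=(max_len, 50)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)
```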

Arrhythmia classification for non-experts using infinite impulse response (IIR)-filter-based machine learning and deep learning models of the electrocardiogram
https://peerj.com/articles/cs-1774
Published: 2024-01-24
Authors: Mallikarjunamallu K, Khasim Syed
Arrhythmias are a leading cause of cardiovascular morbidity and mortality. Portable electrocardiogram (ECG) monitors have been used for decades to monitor patients with arrhythmias. These monitors provide real-time data on cardiac activity to identify irregular heartbeats. However, rhythm monitoring and wave detection, especially in the 12-lead ECG, make it difficult to interpret the analysis and correlate it with the patient's condition. Moreover, even experienced practitioners find ECG analysis challenging. Much of this difficulty is due to the noise in ECG readings and the frequencies at which the noise occurs. The primary objective of this research is to remove noise and extract features from ECG signals using the proposed infinite impulse response (IIR) filter to improve ECG quality, which can be better understood by non-experts. For this purpose, this study used ECG signal data from the Massachusetts Institute of Technology Beth Israel Hospital (MIT-BIH) database. This allows the acquired data to be easily evaluated using machine learning (ML) and deep learning (DL) models and classified by rhythm. To achieve accurate results, we applied hyperparameter (HP) tuning for ML classifiers and fine-tuning (FT) for DL models. This study also examined the categorization of arrhythmias using different filters and the changes in accuracy. As a result, when all models were evaluated, DenseNet-121 without FT achieved 99% accuracy, while FT showed better results with 99.97% accuracy.
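The paper's exact IIR design is not given in the abstract, so the snippet below shows a generic zero-phase Butterworth band-pass filter applied to an ECG-like signal with SciPy; the filter order, cut-off frequencies, and synthetic signal are assumptions (real MIT-BIH records could be loaded with the wfdb package).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def iir_bandpass(signal, fs=360.0, low=0.5, high=40.0, order=4):
    """Zero-phase Butterworth band-pass filter; cut-offs are typical ECG choices."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)

# Synthetic stand-in for an MIT-BIH lead sampled at 360 Hz.
fs = 360.0
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)  # beat + mains noise
clean = iir_bandpass(ecg, fs)
print(clean.shape)
```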

Detection of renal cell hydronephrosis in ultrasound kidney images: a study on the efficacy of deep convolutional neural networks
https://peerj.com/articles/cs-1797
Published: 2024-01-23
Authors: Umar Islam, Abdullah A. Al-Atawi, Hathal Salamah Alwageed, Gulzar Mehmood, Faheem Khan, Nisreen Innab
In the realm of medical imaging, the early detection of kidney issues, particularly renal cell hydronephrosis, holds immense importance. Traditionally, the identification of such conditions within ultrasound images has relied on manual analysis, a labor-intensive and error-prone process. However, in recent years, the emergence of deep learning-based algorithms has paved the way for automation in this domain. This study aims to harness the power of deep learning models to autonomously detect renal cell hydronephrosis in ultrasound images taken in close proximity to the kidneys. State-of-the-art architectures, including VGG16, ResNet50, InceptionV3, and the innovative Novel DCNN, were put to the test and subjected to rigorous comparisons. The performance of each model was meticulously evaluated, employing metrics such as F1 score, accuracy, precision, and recall. The results paint a compelling picture. The Novel DCNN model outshines its peers, boasting an impressive accuracy rate of 99.8%. In the same arena, InceptionV3 achieved a notable 90% accuracy, ResNet50 secured 89%, and VGG16 reached 85%. These outcomes underscore the Novel DCNN’s prowess in the realm of renal cell hydronephrosis detection within ultrasound images. Moreover, this study offers a detailed view of each model’s performance through confusion matrices, shedding light on their abilities to categorize true positives, true negatives, false positives, and false negatives. In this regard, the Novel DCNN model exhibits remarkable proficiency, minimizing both false positives and false negatives. In conclusion, this research underscores the Novel DCNN model’s supremacy in automating the detection of renal cell hydronephrosis in ultrasound images. With its exceptional accuracy and minimal error rates, this model stands as a promising tool for healthcare professionals, facilitating early-stage diagnosis and treatment. Furthermore, the model’s convergence rate and accuracy hold potential for enhancement through further exploration, including testing on larger and more diverse datasets and investigating diverse optimization strategies.
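The per-model evaluation described here (accuracy, precision, recall, F1, and the confusion matrix) can be computed with scikit-learn as in the short sketch below; the label vectors are placeholders.

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Placeholder ground-truth and predicted labels (1 = hydronephrosis, 0 = normal).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
```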

An ensemble learning-based feature selection algorithm for identification of biomarkers of renal cell carcinoma
https://peerj.com/articles/cs-1768
Published: 2024-01-04
Authors: Zekun Xin, Ruhong Lv, Wei Liu, Shenghan Wang, Qiang Gao, Bao Zhang, Guangyu Sun
Feature selection plays a crucial role in classification tasks as part of the data preprocessing process. Effective feature selection can improve the robustness and interpretability of learning algorithms and accelerate model learning. However, traditional statistical methods for feature selection are no longer practical in the context of high-dimensional data due to their computational complexity. Ensemble learning, a prominent learning method in machine learning, has demonstrated exceptional performance, particularly in classification problems. To address this issue, we propose a three-stage feature selection algorithm framework for high-dimensional data based on ensemble learning (EFS-GINI). Firstly, highly linearly correlated features are eliminated using the Spearman coefficient. Then, a feature selector based on the F-test is employed for the first-stage selection. For the second stage, four feature subsets are formed using mutual information (MI), ReliefF, SURF, and SURF* filters in parallel. The third stage involves feature selection using a combinator based on the Gini coefficient. Finally, a soft-voting approach combining decision tree, naive Bayes, support vector machine (SVM), k-nearest neighbors (KNN), and random forest classifiers is employed for classification. To demonstrate the effectiveness and efficiency of the proposed algorithm, eight high-dimensional datasets are used and five feature selection methods are compared with our proposed algorithm. Experimental results show that our method effectively enhances the accuracy and speed of feature selection. Moreover, to explore the biological significance of the proposed algorithm, we apply it to the renal cell carcinoma dataset GSE40435 from the Gene Expression Omnibus database. Two feature genes, NOP2 and NSUN5, are selected by our proposed algorithm. They are directly involved in regulating m5C RNA modification, which reveals the biological importance of EFS-GINI. Through bioinformatics analysis, we show that m5C-related genes play an important role in the occurrence and progression of renal cell carcinoma and are expected to become an important marker to predict the prognosis of patients.
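A condensed sketch of the kind of pipeline described above follows: a Spearman-correlation filter, an F-test selector, one of the parallel filters (mutual information), and a soft-voting ensemble of the five listed classifiers. The ReliefF/SURF/SURF* filters (available, e.g., in the skrebate package) and the Gini-based combinator are omitted, and all thresholds, subset sizes, and data are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def drop_spearman_correlated(X, threshold=0.9):
    """Drop one feature of every pair whose |Spearman rho| exceeds the threshold."""
    rho, _ = spearmanr(X)
    rho = np.abs(rho)
    keep = []
    for j in range(X.shape[1]):
        if all(rho[j, k] < threshold for k in keep):
            keep.append(j)
    return X[:, keep]

X = np.random.rand(100, 200)            # placeholder high-dimensional data
y = np.random.randint(0, 2, 100)

X1 = drop_spearman_correlated(X)                          # stage 1: correlation filter
X2 = SelectKBest(f_classif, k=100).fit_transform(X1, y)   # stage 2: F-test selector
mi = mutual_info_classif(X2, y)                           # one of the parallel filters
X3 = X2[:, np.argsort(mi)[-20:]]                          # keep top 20 by MI (assumed size)

vote = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier()), ("nb", GaussianNB()),
                ("svm", SVC(probability=True)), ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier())],
    voting="soft",
)
vote.fit(X3, y)
```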

MetaSwin: a unified meta vision transformer model for medical image segmentation
https://peerj.com/articles/cs-1762
Published: 2024-01-03
Authors: Soyeon Lee, Minhyeok Lee
Transformers have demonstrated significant promise for computer vision tasks. Particularly noteworthy is SwinUNETR, a model that employs vision transformers and has made remarkable advancements in improving the process of segmenting medical images. Nevertheless, the training process of SwinUNETR has been constrained by an extended training duration, a limitation primarily attributable to the integration of the attention mechanism within the architecture. In this article, to address this limitation, we introduce a novel framework, called the MetaSwin model. Drawing inspiration from the MetaFormer concept, which uses alternative token-mixing operations, we propose a transformative modification by substituting attention-based components within SwinUNETR with a straightforward yet impactful spatial pooling operation. Additionally, we incorporate Squeeze-and-Excitation (SE) blocks after each MetaSwin block of the encoder and into the decoder, which aims to improve segmentation performance. We evaluate our proposed MetaSwin model on two distinct medical datasets, namely BraTS 2023 and MICCAI 2015 BTCV, and conduct a comprehensive comparison with two baselines, i.e., the SwinUNETR and SwinUNETR+SE models. Our results emphasize the effectiveness of MetaSwin, showcasing its competitive edge against the baselines, utilizing a simple pooling operation and efficient SE blocks. MetaSwin’s consistent and superior performance on the BTCV dataset, in comparison to SwinUNETR, is particularly significant. For instance, with a model size of 24, MetaSwin outperforms SwinUNETR’s 76.58% Dice score using fewer parameters (15,407,384 vs 15,703,304) and a substantially reduced training time (300 vs 467 mins), achieving an improved Dice score of 79.12%. This research highlights the essential contribution of a simplified transformer framework, incorporating basic elements such as pooling and SE blocks, thus emphasizing their potential to guide the progression of medical segmentation models without relying on complex attention-based mechanisms.
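The two ingredients named above, a pooling-based token mixer in place of attention and SE blocks, can be sketched in a few lines of PyTorch. This 2D illustration uses assumed channel counts and pool sizes and is not the MetaSwin code, which operates on 3D volumes.

```python
import torch
import torch.nn as nn

class PoolingTokenMixer(nn.Module):
    """PoolFormer-style token mixer: average pooling minus identity replaces attention."""
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x):            # x: (B, C, H, W)
        return self.pool(x) - x      # subtracting x keeps only the mixed residual

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: channel-wise reweighting from globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), x.size(1), 1, 1)
        return x * w

if __name__ == "__main__":
    feat = torch.randn(2, 48, 32, 32)
    feat = PoolingTokenMixer()(feat)
    feat = SEBlock(48)(feat)
    print(feat.shape)  # torch.Size([2, 48, 32, 32])
```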

Survival and grade of the glioma prediction using transfer learning
https://peerj.com/articles/cs-1723
Published: 2023-12-08
Authors: Santiago Valbuena Rubio, María Teresa García-Ordás, Oscar García-Olalla Olivera, Héctor Alaiz-Moretón, Maria-Inmaculada González-Alonso, José Alberto Benítez-Andrades
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3–6 months without treatment. Detecting it and accurately predicting patient survival and tumor grade are therefore crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming to achieve two objectives: survival and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, the prediction of tumor grade achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, surpassing the current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential in enhancing prediction models, particularly in scenarios where large datasets are limited. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
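A minimal Keras sketch of this transfer-learning setup follows: a frozen ImageNet-pretrained backbone with a small classification head for the binary LGG/HGG grade task (a three-class survival head would simply use num_classes=3). The backbone choice, input size, and hyperparameters are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_grade_classifier(num_classes=2, input_shape=(224, 224, 3)):
    """Fine-tune a pre-trained backbone for LGG/HGG grade prediction (sketch)."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False            # freeze first; unfreeze top layers to fine-tune later
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_grade_classifier()
model.summary()
```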

AMSF: attention-based multi-view slice fusion for early diagnosis of Alzheimer’s disease
https://peerj.com/articles/cs-1706
Published: 2023-11-23
Authors: Yameng Zhang, Shaokang Peng, Zhihua Xue, Guohua Zhao, Qing Li, Zhiyuan Zhu, Yufei Gao, Lingfei Kong
Alzheimer’s disease (AD) is an irreversible neurodegenerative disease with a high prevalence in the elderly population over 65 years of age. Intervention in the early stages of AD is of great significance for alleviating the symptoms. Recent advances in deep learning have shown significant advantages in computer-aided diagnosis of AD. However, most studies only focus on extracting features from slices in specific directions or from whole brain images, ignoring the complementarity between features from different angles. To overcome this problem, attention-based multi-view slice fusion (AMSF) is proposed for accurate early diagnosis of AD. It fuses three-dimensional (3D) global features with multi-view 2D slice features, using an attention mechanism to guide the fusion of slice features for each view, to generate a comprehensive representation of the MRI images for classification. The experiments on the public dataset demonstrate that AMSF achieves 94.3% accuracy, which is 1.6–7.1% higher than previous promising methods. This indicates that a better solution for early AD diagnosis depends not only on the large scale of the dataset but also on the organic combination of feature construction strategy and deep neural networks.
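A hedged PyTorch sketch of attention-guided fusion of multi-view slice features with a 3D global feature is shown below; the feature dimensions, the dot-product attention over views, and the two-class head are assumptions, not the AMSF architecture.

```python
import torch
import torch.nn as nn

class AttentionSliceFusion(nn.Module):
    """Weight per-view 2D slice features by their affinity to a 3D global feature,
    then fuse everything into one representation (illustrative, not the AMSF code)."""
    def __init__(self, dim=256, num_views=3):
        super().__init__()
        self.query = nn.Linear(dim, dim)          # from the 3D global feature
        self.key = nn.Linear(dim, dim)            # from each 2D view feature
        self.classifier = nn.Linear(2 * dim, 2)   # AD vs. normal control (assumed head)

    def forward(self, global_feat, view_feats):
        # global_feat: (B, dim); view_feats: (B, num_views, dim)
        q = self.query(global_feat).unsqueeze(1)                          # (B, 1, dim)
        k = self.key(view_feats)                                          # (B, V, dim)
        attn = torch.softmax((q * k).sum(-1) / k.size(-1) ** 0.5, dim=1)  # (B, V)
        fused_views = (attn.unsqueeze(-1) * view_feats).sum(1)            # (B, dim)
        return self.classifier(torch.cat([global_feat, fused_views], dim=-1))

if __name__ == "__main__":
    logits = AttentionSliceFusion()(torch.randn(4, 256), torch.randn(4, 3, 256))
    print(logits.shape)  # torch.Size([4, 2])
```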