Background: Breast cancer remains one of the most prevalent malignancies globally, necessitating accurate, noninvasive diagnostic strategies. This study introduces a novel multimodal deep learning framework utilizing an Adaptive Modality Weighting Convolutional Neural Network (AMW-CNN) for classifying benign and malignant breast masses based on mammography, ultrasound, and clinical data.
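The core idea of adaptive modality weighting is that each modality's contribution to the fused representation is learned rather than fixed. The abstract does not specify the exact fusion mechanism, so the following is a minimal illustrative sketch, assuming softmax-normalized per-modality weights applied to modality embeddings; the function names, embedding dimension, and fixed logits are hypothetical (in the full model the logits would be trained end to end).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def adaptive_modality_fusion(features, logits):
    """Fuse per-modality feature vectors with softmax-normalized weights.

    features: list of 1-D arrays, one embedding per modality
              (e.g. mammography, ultrasound, clinical data).
    logits:   per-modality scores; learnable in the real model,
              fixed here purely for illustration.
    """
    w = softmax(np.asarray(logits, dtype=float))
    stacked = np.stack(features)   # shape: (n_modalities, dim)
    return w @ stacked             # convex combination, shape: (dim,)

# Toy example: three modality embeddings of dimension 4.
mammo = np.array([0.2, 0.4, 0.1, 0.3])
ultra = np.array([0.5, 0.1, 0.2, 0.2])
clinical = np.array([0.1, 0.1, 0.7, 0.1])
fused = adaptive_modality_fusion([mammo, ultra, clinical],
                                 logits=[1.0, 2.0, 0.5])
```

Because the weights sum to one, the fused vector is a convex combination of the modality embeddings, which keeps the fusion stable while letting the network emphasize the most informative modality per case.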
Methods: This cross-sectional study included 92 women with biopsy-confirmed breast masses. Imaging data were preprocessed using the Segment Anything Model (SAM), and data augmentation expanded the dataset to 10,000 images. We developed and evaluated five multimodal deep learning models (ANN, RNN, Transformer, CNN, and AMW-CNN), alongside their unimodal counterparts. Performance was assessed via 5-fold cross-validation using accuracy, sensitivity, specificity, precision, the Matthews correlation coefficient (MCC), and the area under the ROC curve (AUC). Statistical analyses included McNemar's test for pairwise model comparison and independent-samples t-tests for age differences between the benign and malignant groups.
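The classification metrics listed above all derive from the binary confusion matrix. As a reference for how they relate, here is a small self-contained sketch computing them from predicted and true labels; the function name and label coding (1 = malignant, 0 = benign) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, precision, and MCC
    from binary labels (1 = malignant, 0 = benign)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if tp + fn else 0.0   # recall on malignant cases
    spec = tn / (tn + fp) if tn + fp else 0.0   # recall on benign cases
    prec = tp / (tp + fp) if tp + fp else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "precision": prec, "mcc": mcc}

# Perfect predictions give MCC = 1; errors pull it toward 0.
m_perfect = binary_metrics([1, 0, 1, 0], [1, 0, 1, 0])
m_mixed = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

In a 5-fold cross-validation setup these metrics would be computed on each held-out fold and averaged; MCC is reported alongside accuracy because it remains informative under class imbalance.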
Results: The proposed AMW-CNN model achieved superior performance across all metrics, with an AUC of 99.93%, accuracy of 99.08%, sensitivity of 99.36%, specificity of 98.79%, precision of 98.82%, and MCC of 98.16%. It significantly outperformed the other models (McNemar's test, p < 0.05). Additionally, AMW-CNN achieved AUCs of 99.98% with ultrasound alone and 99.96% with mammography alone. Age was also a significant factor distinguishing malignant from benign masses (p < 0.001).
Conclusion: The integration of multimodal imaging and clinical data, guided by adaptive weighting, substantially improves diagnostic accuracy in breast cancer classification. The AMW-CNN model shows robust potential for clinical implementation, although validation on larger, multicenter datasets is warranted.