A Lightweight Texture-Aware Deep Convolutional Neural Network (TADCNN) for Histopathological Classification of Lung Cancer
Abstract
Background: Accurate lung cancer subtype classification from microscopic tissue images is difficult because tissue textures are highly similar and morphological differences are subtle. Recent CNN/Transformer approaches can be accurate, but their heavy backbones introduce high latency and memory demands and offer limited mechanisms to encode pathology-specific texture priors, constraining clinical deployment. Method: We present TADCNN, a lightweight, texture-aware CNN that operates across modalities (histology tiles and CT slices) without architecture changes. TADCNN couples a scale-conditional multiscale texture encoder (SC-PTEM) with a texture-aware attention module (TAAM); together, these modules jointly model spatial and channel saliency. Results: Under the same training protocol, TADCNN attains 99.84% accuracy on LC25000 and 99.55% on IQ-OTH/NCCD, outperforming heavy CNN/Transformer backbones and lightweight mobile baselines (MobileNetV2, ShuffleNetV2). Despite its higher accuracy, TADCNN uses fewer parameters and FLOPs than the mobile baselines and runs in real time, enabling point-of-care deployment. Conclusion: These results show that careful multi-scale texture encoding combined with joint spatial and channel attention delivers state-of-the-art accuracy with deployable efficiency and cross-modal versatility.
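To make the abstract's notion of "joint spatial and channel saliency" concrete, the sketch below shows a generic attention block of that kind in PyTorch. It is a minimal, hypothetical illustration (CBAM-style channel reweighting followed by spatial reweighting), not the authors' published TAAM; the class name, reduction ratio, and kernel sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class JointSpatialChannelAttention(nn.Module):
    """Illustrative joint channel + spatial attention block.

    NOTE: hypothetical sketch of the kind of texture-aware attention the
    abstract describes (TAAM); not the paper's actual implementation.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over channel-pooled descriptors.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel saliency: reweight feature channels.
        x = x * self.channel_gate(x)
        # Spatial saliency: reweight locations using mean- and max-pooled
        # channel statistics.
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 56, 56)        # dummy feature map
    attn = JointSpatialChannelAttention(64)
    print(attn(feats).shape)                  # torch.Size([2, 64, 56, 56])
```

Such a block is lightweight (a small bottleneck MLP plus one 7x7 convolution), which is consistent with the abstract's emphasis on low parameter and FLOP budgets for point-of-care deployment.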