Adaptive anisotropic convolutional network for renal neoplasm segmentation: A deep learning framework with directional feature learning
Abstract
Accurate segmentation of kidneys and renal tumors in CT images is crucial for the diagnosis of renal cancer. However, abdominal CT scans often exhibit anisotropic resolution between the in-plane and through-plane (slice) directions. Conventional 3D convolutions assume isotropy, which leads to feature misalignment and degraded segmentation performance. To address this issue, we propose an adaptive anisotropic convolution module that integrates spatially separable convolution with standard 3D convolution through a dynamic selection mechanism. This adaptive fusion enhances feature extraction in anisotropic contexts. Furthermore, we design a deep 3D U-Net incorporating the proposed module and introduce cross-scale feature fusion in the encoder to mitigate detail loss during downsampling. Evaluated on the KiTS19 dataset (90 test cases), our method achieves Dice scores of 97.04% for kidney and 85.10% for tumor segmentation, surpassing the baseline 3D U-Net by 1.19 and 3.30 percentage points, respectively, and outperforming state-of-the-art models such as nnFormer, 3DUX-Net, and SegMamba. These results demonstrate that our network significantly improves segmentation accuracy and provides reliable support for renal cancer diagnosis.
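The core idea behind the spatially separable branch can be illustrated numerically: a 3D kernel that factorizes into an in-plane (1×3×3) part and a through-plane (3×1×1) part can be applied as two cheaper sequential convolutions with the same result as the full 3×3×3 convolution. The sketch below is a minimal NumPy illustration of this factorization, not the paper's implementation; the naive `conv3d` helper and the scalar gate `g` standing in for the dynamic selection mechanism are assumptions for demonstration only.

```python
import numpy as np

def conv3d(x, k):
    # Naive 'valid' 3D cross-correlation, for illustration only.
    d, h, w = k.shape
    out = np.zeros((x.shape[0] - d + 1, x.shape[1] - h + 1, x.shape[2] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(x[i:i + d, j:j + h, l:l + w] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 8))          # toy volume (depth, height, width)
k_inplane = rng.standard_normal((1, 3, 3))  # in-plane kernel
k_axial = rng.standard_normal((3, 1, 1))    # through-plane kernel

# Spatially separable path: two cheap convolutions applied sequentially.
sep = conv3d(conv3d(x, k_inplane), k_axial)

# Equivalent full 3x3x3 convolution with the broadcast outer-product kernel.
full = conv3d(x, k_axial * k_inplane)
assert np.allclose(sep, full)

# Hypothetical stand-in for the dynamic selection mechanism: a gate g blends
# the separable response with a standard 3D convolution (g would be learned).
g = 0.7
std = conv3d(x, rng.standard_normal((3, 3, 3)))
fused = g * sep + (1 - g) * std
```

The separable path needs 9 + 3 multiplications per output voxel instead of 27, which is why it is attractive when the through-plane resolution differs from the in-plane resolution; in the proposed module the blend weight between the two branches is produced adaptively rather than fixed as above.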