Accurate and explainable ICD-10 coding through multi-stage model adaptation and evidence-guided verification
Abstract
The International Classification of Diseases (ICD) coding system plays a vital role in insurance reimbursement, patient care, and health monitoring. Existing methods, however, struggle to generate accurate and explainable ICD codes due to domain misalignment, imbalanced label distributions, and limited clinical evidence discovery. This study aims to develop and evaluate an approach for adapting a Large Language Model (LLM) to accurate and explainable ICD-10 coding. To address these limitations, we introduce three advances: first, a two-stage Supervised Fine-Tuning (SFT) strategy that progressively aligns the model with ICD-specific knowledge while mitigating catastrophic forgetting; second, a novel Reinforcement Learning (RL) algorithm with task-adaptive reward and advantage functions that handles the hierarchical structure of ICD-10 codes; and third, a Retrieval-Augmented Generation (RAG)-based verifier that filters unsupported predictions. On the MIMIC-IV dataset, the model trained with the two-stage SFT strategy achieved a 49.6-point improvement in Micro-F1 over the base model (from 14.4% to 64.0%). ICD-specific RL training further raised performance by 2.7 points to 66.7%, and the verification stage achieved a best Micro-F1 of 67.5%, outperforming the previous state of the art by 9 points. Together, these advances substantially improve coding accuracy and explainability, grounding predictions in clinical evidence. The proposed approach offers an accurate and explainable solution for automated ICD-10 coding, with potential for real-world clinical deployment.