Enhancing time series forecasting with Kolmogorov–Arnold networks and a robust hybrid loss function
Abstract
This paper studies Kolmogorov–Arnold Networks (KANs) for predictive modeling, which offer theoretical value in improving interpretability and practical value for tasks such as financial time-series forecasting. Existing approaches typically use node-based activations with linear edge mappings, or global basis expansions such as Jacobi polynomials. However, these methods lack adaptive error penalization, are insensitive to directional bias, and can suffer from gradient instability. To address these issues, we relocate activation functions from nodes to edges and model each edge mapping with a univariate spline function, which permits local refinement and greater flexibility. We also propose a Huber log-cosh quantile (HLQ) loss that combines Huber robustness, log-cosh smoothness, and quantile-aware penalization to weight errors adaptively by magnitude and direction. On 23 classical benchmark tests, our framework reached the ideal optimum in 78.2% of cases. Experiments on Amazon stock-price data, with the enhanced KAN embedded in a GRU architecture, show a 6% improvement on the training set and an 8% improvement on the test set over the original KAN; additional validations with BiLSTM, TCN, and LSTM confirm the method's robustness and generalizability.
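The abstract names three ingredients of the HLQ loss but not its exact formula, so the following is a hypothetical sketch of how such a hybrid could be composed: a log-cosh core (quadratic near zero, linear in the tails), a Huber-style threshold `delta` beyond which the penalty continues linearly, and pinball-style quantile weights at level `tau` that penalize over- and under-predictions asymmetrically. The function name and parameters are illustrative, not the paper's definitions.

```python
import numpy as np

def hlq_loss(y_true, y_pred, delta=1.0, tau=0.5):
    """Hypothetical sketch of a Huber log-cosh quantile (HLQ) loss.

    - log-cosh smoothness: log(cosh(e)) ~ e^2/2 near zero, ~|e| for large e
    - Huber robustness: beyond `delta` the penalty grows only linearly
      (slope tanh(delta), which makes the junction C^1-continuous)
    - quantile awareness: errors are weighted by `tau` when the model
      under-predicts and by (1 - tau) when it over-predicts
    """
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    # Pinball-style asymmetric weights on the error direction.
    w = np.where(e >= 0, tau, 1.0 - tau)
    # Smooth log-cosh core inside the Huber threshold ...
    small = np.log(np.cosh(e))
    # ... linear continuation beyond it for outlier robustness.
    large = np.log(np.cosh(delta)) + np.tanh(delta) * (np.abs(e) - delta)
    per_sample = np.where(np.abs(e) <= delta, small, large)
    return float(np.mean(w * per_sample))
```

With `tau = 0.5` the weights are symmetric and the loss reduces to a Huber-capped log-cosh; raising `tau` above 0.5 penalizes under-predictions more heavily, which is one way to encode the directional bias the abstract describes.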