LightSNN: An energy-aware spiking neural network architecture for time series forecasting
Abstract
Spiking Neural Networks (SNNs) offer substantial energy savings compared to traditional deep learning models, though they often suffer from reduced prediction accuracy when forecasting continuous-valued signals. We introduce LightSNN, a time-series forecasting architecture that bridges this gap by holistically optimizing for both forecast accuracy and energy efficiency. Built on the Leaky Integrate-and-Fire (LIF) neuron model, LightSNN enhances performance by unifying three orthogonal mechanisms: (1) lightweight temporal gating that prunes non-salient input features, (2) neuron-wise adaptive thresholding that dynamically regulates firing activity toward a target sparsity, and (3) knowledge distillation from a high-performing non-spiking teacher model that preserves the amplitude fidelity of the output signal. A tailored two-phase training framework leverages surrogate-gradient backpropagation to jointly optimize for accurate regression and low computational overhead. Extensive experiments across four real-world datasets show that LightSNN matches or surpasses a standard RNN-based baseline in forecasting accuracy while dramatically reducing the estimated Energy-Delay Product, as verified through both analytical multiply-accumulate/accumulate (MAC/AC) operation counts and neuromorphic hardware profiling. By combining event-driven sparsity with mechanisms that preserve continuous-valued information, LightSNN provides a viable, energy-efficient forecasting solution particularly suited for deployment in resource-constrained environments such as smart grid systems and edge AI devices, where both computational efficiency and predictive reliability are critical.
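To make the LIF-with-adaptive-threshold mechanism concrete, the following is a minimal NumPy sketch of a discrete-time LIF update paired with a homeostatic threshold rule that nudges each neuron's firing rate toward a target sparsity. All function names, constants, and the specific update rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lif_step(v, x, thresh, beta=0.9):
    """One discrete-time LIF update: leak, integrate, fire, soft reset.
    (beta is an assumed leak factor; the paper's value may differ.)"""
    v = beta * v + x                      # leaky integration of input current
    spikes = (v >= thresh).astype(float)  # fire where potential crosses threshold
    v = v - spikes * thresh               # soft reset by subtracting the threshold
    return v, spikes

def adapt_threshold(thresh, spikes, target_rate=0.1, eta=0.01):
    """Neuron-wise homeostatic update: raise a neuron's threshold when it
    fires above the target rate, lower it otherwise (illustrative rule)."""
    return np.maximum(thresh + eta * (spikes.mean(axis=0) - target_rate), 1e-3)

# Toy rollout: 4 neurons driven by random input current for 100 steps.
rng = np.random.default_rng(0)
v = np.zeros(4)        # membrane potentials
thresh = np.ones(4)    # per-neuron thresholds
for t in range(100):
    x = rng.normal(0.2, 0.5, size=4)
    v, s = lif_step(v, x, thresh)
    thresh = adapt_threshold(thresh, s[None, :])
```

In training, the hard spike nonlinearity would be replaced by a surrogate gradient (e.g. a sigmoid derivative around the threshold) so the whole network remains differentiable, as the abstract's two-phase framework describes.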