State transitions unlock temporal memory in swarm-based reservoir computing
Abstract
Swarms offer a compelling substrate for reservoir computing: agents interact through local rules while continuously rewiring their effective connectivity. We revisit swarm-based reservoirs with a focus on temporal memory and on the impact of equipping agents with a simple internal state system. Rather than emphasizing single-task forecasting, our contribution is a clear, reproducible characterization of the swarm reservoir’s temporal memory and its scaling behavior, together with a practical implementation recipe compatible with GPU acceleration. This positions multi-agent collectives as physically embodied alternatives to canonical neural reservoirs and clarifies when and why they are likely to be useful. Under a pure memory-capacity (MC) protocol (linear readout, no polynomial expansion), introducing a second internal state with rule-based transitions increases memory by two orders of magnitude over single-state swarms (e.g., at swarm size N = 1600: MC > 20 vs. MC ≈ 0.1), establishing state transitions as a simple yet essential extension for temporal memory. With transitions present, the swarm’s total MC scales linearly with population over N = 800–2000 (MC ≈ 0.0123 · N + 1.61) and is robust to moderate process noise; a pure-MC validation recovers the same slope (MC ≈ 0.0122 · N + 1.64), indicating that the effect is intrinsic to the swarm dynamics rather than a post-processing artifact. For context, a canonical neural reservoir exhibits the expected increase of memory with dimensionality. Finally, one-step chaotic prediction reveals a trade-off: single-state swarms excel at instantaneous prediction, while multi-state swarms excel at temporal memory.
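The pure memory-capacity protocol referenced above (linear readout per delay, no polynomial expansion, MC summed as squared correlations) can be sketched as follows. A small echo-state network stands in for the swarm reservoir here; all parameters (reservoir size, spectral radius, input scaling, washout, delay range) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo-state reservoir as a stand-in for the swarm (hypothetical parameters).
N, T, washout, max_delay = 100, 4000, 200, 50
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # rescale to spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=N)

u = rng.uniform(-1, 1, size=T)               # i.i.d. input signal
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Pure-MC protocol: one linear readout per delay, no polynomial expansion.
X = states[washout:]
half = len(X) // 2                           # train/test split to avoid overfit
mc = 0.0
for k in range(1, max_delay + 1):
    y = u[washout - k:T - k]                 # target: input delayed by k steps
    w, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
    y_hat = X[half:] @ w
    mc += np.corrcoef(y_hat, y[half:])[0, 1] ** 2   # MC_k = squared correlation
print(f"total memory capacity ~ {mc:.2f}")
```

Total MC is bounded above by the number of delays evaluated (and, for a linear readout, by the reservoir dimension); the swarm-based experiments in the paper would replace the echo-state update with the agents' state trajectories.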