This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Sparse coding is an effective operating principle for the brain, one that can guide the discovery of features and support the learning of associations. Here we show how spiking neurons with discrete dendrites can learn sparse codes via an online, nonlinear Hebbian rule based on the concept of somato-dendritic mismatch. The rule gives lateral inhibition direct control over the selectivity of dendritic receptive fields, without the need for a sliding threshold. The network discovers independent components similar to the features learned by a sparse autoencoder, which improves the linear decodability of the input: combined with a linear readout, our single-layer network performs as well as a deeper multilayer perceptron on the MNIST dataset. It can also produce topographic feature maps when the lateral connections are organised in a centre-surround pattern, although this does not improve the quality of the encoding.
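The abstract does not spell out the somato-dendritic learning rule itself. As a generic illustration of the broader idea it builds on (Hebbian feature learning shaped by lateral inhibition, in the spirit of classic Földiák-style sparse-coding networks, not the authors' specific rule), a minimal NumPy sketch might look like the following; all names, sizes, and learning rates here are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch only: a generic sparse-coding network with Hebbian
# feedforward learning and anti-Hebbian lateral inhibition. This is NOT the
# paper's somato-dendritic mismatch rule; sizes and rates are arbitrary.
rng = np.random.default_rng(0)
n_in, n_hid = 16, 8
W = rng.normal(0.0, 0.1, (n_hid, n_in))   # feedforward weights (receptive fields)
L = np.zeros((n_hid, n_hid))              # lateral inhibitory weights

def encode(x, W, L, steps=30, dt=0.2):
    # Settle hidden activities under recurrent inhibition; activities are
    # clipped to [0, 1] to keep the dynamics bounded.
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        y = np.clip(y + dt * (W @ x - L @ y - y), 0.0, 1.0)
    return y

# Train on random sparse binary stimuli (a stand-in for real input data).
for _ in range(500):
    x = (rng.random(n_in) < 0.2).astype(float)
    y = encode(x, W, L)
    W += 0.02 * y[:, None] * (x[None, :] - W)   # Hebbian: pull RFs toward inputs
    L += 0.05 * (np.outer(y, y) - 0.01)         # anti-Hebbian: decorrelate units
    np.fill_diagonal(L, 0.0)                    # no self-inhibition
    np.clip(L, 0.0, None, out=L)                # inhibition stays non-negative
```

The anti-Hebbian lateral update is what keeps the code sparse: units that fire together come to inhibit each other, so each unit is pushed toward a distinct feature, which is one route to the improved linear decodability the abstract describes.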