The brain computes with spikes, and spikes are metabolically costly. Yet the spiking responses of most cortical cells appear extremely noisy, to the extent that the only feature that repeats from trial to trial is the firing rate (and only for the small minority of cells that are modulated at all). If that were the case, the most sophisticated central nervous system, the mammalian brain, would be one of the least efficient and reliable in the animal kingdom. Another prominent feature of cortical networks is that they maintain a tight balance between excitation and inhibition. This balance can produce highly variable spike trains through chaotic dynamics; however, it does not answer the question of why circuits are organized this way.
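The variability point can be illustrated with a minimal sketch in Python, with entirely hypothetical parameters: a deterministic network of leaky integrate-and-fire neurons in the classic balanced regime, where recurrent weights scale as 1/sqrt(K), inhibition dominates, and a strong external drive is dynamically cancelled. No noise is injected anywhere, yet interspike intervals come out irregular (coefficient of variation near 1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a small balanced E/I network (illustrative only)
N_E, N_I = 400, 100                  # excitatory / inhibitory population sizes
K = 50                               # average number of synapses per neuron
J = 0.2                              # base synaptic strength
g = 5.0                              # inhibition-to-excitation strength ratio
tau, v_th, v_reset = 20.0, 1.0, 0.0  # membrane time constant (ms), threshold, reset
dt, T = 0.1, 1000.0                  # integration step and duration (ms)

N = N_E + N_I
# Sparse random connectivity with weights ~ 1/sqrt(K), the balanced-network scaling
conn = rng.random((N, N)) < K / N
W = np.zeros((N, N))
W[:, :N_E] = (J / np.sqrt(K)) * conn[:, :N_E]       # excitatory columns
W[:, N_E:] = -(g * J / np.sqrt(K)) * conn[:, N_E:]  # inhibitory columns
np.fill_diagonal(W, 0.0)                            # no self-connections

I_ext = 1.5 * np.sqrt(K) * J   # strong O(sqrt(K)) external drive
v = rng.random(N) * v_th       # random initial voltages
spike_times = [[] for _ in range(N)]

for step in range(int(T / dt)):
    spiked = v >= v_th
    for i in np.flatnonzero(spiked):
        spike_times[i].append(step * dt)
    v[spiked] = v_reset
    v += (dt / tau) * (-v + I_ext)  # leak toward the external drive
    v += W @ spiked                 # instantaneous (delta-pulse) recurrent input

# Irregularity despite fully deterministic dynamics: ISI CV near 1 is Poisson-like
isis = [np.diff(ts) for ts in spike_times if len(ts) > 2]
cvs = [isi.std() / isi.mean() for isi in isis]
print(f"mean ISI CV across {len(cvs)} active neurons: {np.mean(cvs):.2f}")
```

The irregularity here comes entirely from the chaotic recurrent dynamics produced by the cancellation of excitation and inhibition, which is the phenomenon the balance literature explains; it says nothing yet about why the circuit should be built this way.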
Here, we will challenge this view and show that networks that learn E/I balance converge to a state where they represent their stimuli most efficiently, i.e., they achieve the highest possible accuracy and robustness for the number of spikes they spend. Neural responses are highly variable not because they are noisy, but because the population code is highly degenerate: many different spiking patterns are equivalent in terms of coding. Finally, borrowing concepts from control theory, we will show that these networks can learn any dynamical system; for example, they can learn to integrate sensory input, memorize, make decisions, and control motor effectors. The number of spikes and the population size required to perform these tasks are orders of magnitude smaller than in rate-based models, including “neuroengineering” approaches.
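The degeneracy argument can be sketched with a toy predictive spike-coding network encoding a one-dimensional signal. All parameters, the greedy tie-breaking rule, and the decoding weights below are our illustrative assumptions, not the learned solution described above. Each neuron's membrane potential is the projection of the coding error onto its decoding weight, and a neuron fires only when its spike reduces that error; when several neurons are above threshold at once, any of them may fire, so different runs produce different spike trains with nearly identical readouts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a 1-D predictive spike-coding sketch
N = 20                          # number of neurons
dt, T, lam = 1e-3, 1.0, 10.0    # time step (s), duration (s), readout decay rate
Gamma = rng.normal(0, 0.1, N)   # decoding weights (both signs, so the net tracks up and down)
thresh = Gamma**2 / 2           # threshold at which a spike reduces the squared error

t = np.arange(0, T, dt)
x = np.sin(2 * np.pi * 2 * t)   # target signal to encode

def run(seed):
    """One network run; the seed only breaks ties between coding-equivalent spikes."""
    r = np.random.default_rng(seed)
    x_hat = 0.0
    spikes = np.zeros((len(t), N), dtype=bool)
    x_hat_trace = np.zeros(len(t))
    for k in range(len(t)):
        # Membrane potential = projection of the coding error on each neuron
        V = Gamma * (x[k] - x_hat)
        above = np.flatnonzero(V > thresh)
        if above.size:
            # Every neuron above threshold would reduce the error; pick one at
            # random. Different choices -> different spike trains, same code.
            j = r.choice(above)
            spikes[k, j] = True
            x_hat += Gamma[j]           # spike instantly updates the readout
        x_hat += dt * (-lam * x_hat)    # leaky readout decay
        x_hat_trace[k] = x_hat
    return x_hat_trace, spikes

xh1, s1 = run(seed=2)
xh2, s2 = run(seed=3)
print("readout RMS error (run 1):   ", np.sqrt(np.mean((x - xh1) ** 2)))
print("coincident spikes across runs:", (s1 & s2).sum() / max(s1.sum(), 1))
print("max readout difference:       ", np.max(np.abs(xh1 - xh2)))
```

The two runs share almost no coincident spikes, yet both readouts track the signal with the same accuracy: trial-to-trial spike variability with a precise population code, at a spike count set by the coding error rather than by a noisy rate.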
Our framework suggests that the brain is not noisy but degenerate, because it must combine high capacity and high robustness at a reasonable metabolic cost. This has far-reaching implications for the neural code, but also for the optimization of neuromorphic systems.