Recent results in information theory show how error-correcting codes on large, sparse graphs (“expander graphs”) can leverage multiple weak constraints to produce near-optimal performance. I will demonstrate a mapping between these error-correcting codes and canonical models of neural memory (“Hopfield networks”), and use expander-code ideas to construct Hopfield networks with a number of stable states exponential in network size that robustly correct errors in a finite fraction of nodes. I will also suggest how such networks can be used to robustly assign and retrieve labels for very large numbers of input patterns.
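To make the flavor of the mapping concrete, here is a minimal sketch of the kind of dynamics involved: greedy bit-flipping decoding on a random sparse parity-check graph, viewed as Hopfield-like attractor dynamics whose stable states are the codewords. The random graph, sizes, and function names below are illustrative assumptions, not the construction presented in the talk; a true expander code would require a graph with a verified expansion property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: n variable nodes (neurons), m parity checks, deg checks per node.
n, m, deg = 60, 30, 3

# Random sparse bipartite parity-check graph H. A genuine expander code needs a graph with
# a verified expansion property; a random sparse graph merely stands in for one here.
H = np.zeros((m, n), dtype=int)
for j in range(n):
    H[rng.choice(m, size=deg, replace=False), j] = 1

def unsatisfied_counts(H, x):
    """For each variable node, count how many of its parity checks are currently violated."""
    violated = (H @ x) % 2          # 1 for each unsatisfied check
    return H.T @ violated           # per-node count of violated neighboring checks

def bit_flip_decode(H, x, max_iters=1000):
    """Asynchronous greedy dynamics (Hopfield-style): flip any node that violates more than
    half of its checks. Each flip strictly reduces the number of violated checks (an energy
    function), so the dynamics settle into a fixed point; codewords are the zero-energy states."""
    x = x.copy()
    degrees = H.sum(axis=0)
    for _ in range(max_iters):
        counts = unsatisfied_counts(H, x)
        candidates = np.flatnonzero(2 * counts > degrees)
        if candidates.size == 0:
            break                   # fixed point reached
        worst = candidates[np.argmax(counts[candidates])]
        x[worst] ^= 1               # flip the node violating the most checks
    return x

# The code has at least 2**(n - m) = 2**30 codewords here, i.e. a number of stable
# states exponential in n. Corrupt a few bits of one codeword and let the dynamics clean up.
codeword = np.zeros(n, dtype=int)
noisy = codeword.copy()
noisy[rng.choice(n, size=3, replace=False)] ^= 1
recovered = bit_flip_decode(H, noisy)
print("all checks satisfied:", not ((H @ recovered) % 2).any(),
      "| original pattern recovered:", bool((recovered == codeword).all()))
```

The sketch uses only local, sparse interactions and a simple energy-descending update rule, which is what makes this family of constructions plausible as a model of neural dynamics.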
The network architectures exploit generic properties of large, distributed systems and map naturally onto neural dynamics, suggesting appealing theoretical frameworks for understanding computation in the brain. Moreover, they point to a computational explanation for the sparsity of neural responses observed in many cognitive brain areas. These results thus link powerful error-correcting frameworks to neuroscience, providing insight into principles that neurons might use and potentially offering new ways to interpret experimental data.