New work from the Redwood Center has recently been released in preprint form across a variety of topics. We encourage you to check out the following preprints, which are in various stages of submission:
- Robust computation with rhythmic spike patterns (Paxon Frady & Fritz Sommer)
  - This paper develops a theory of associative memory that stores patterns in stable periodic states of precisely timed spiking activity.
- Numerically recovering the critical points of a deep linear autoencoder (Charles Frye, Neha Wadia, Mike DeWeese, & Kris Bouchard)
  - This paper examines the problem of numerically identifying the critical points of a deep linear autoencoder, as a step toward understanding the training and generalization performance of artificial neural networks.
- Superposition of many models into one (Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, & Bruno Olshausen)
  - This paper proposes a way to train neural networks to perform many different tasks while mitigating the problem of ‘catastrophic forgetting’, via a scheme that stores the model parameters corresponding to each task in superposition within a single model (a toy sketch of parameter superposition appears after this list).
- Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network (Louis Kang & Mike DeWeese)
  - This paper proposes an experimentally supported model for grid cells that naturally enables them to participate in firing sequences that encode rapid trajectories in space.
- Neural Empirical Bayes (Saeed Saremi & Aapo Hyvärinen)
  - This paper presents a technique for unsupervised learning that combines kernel density estimation and empirical Bayes (a toy sketch of the empirical Bayes step appears after this list).
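
As a rough illustration of the superposition idea in Cheung et al. (not their exact construction, which explores several kinds of binding operators), the sketch below stores several linear "models" in a single parameter vector by binding each one with a random ±1 key; any one model can then be read out with only small interference from the others when the dimension is large. The dimensions, keys, and toy readouts here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tasks = 4096, 10   # parameter dimension and number of stored "models"

# Toy per-task linear readouts: each w_k maps its task's probe vector to 1.0.
probes = [rng.normal(size=d) / np.sqrt(d) for _ in range(n_tasks)]
weights = [p / np.dot(p, p) for p in probes]       # so that w_k @ probe_k == 1

# A random +/-1 context key per task (one simple choice of binding operator).
keys = [rng.choice([-1.0, 1.0], size=d) for _ in range(n_tasks)]

# Store every model in a single parameter vector via key-bound superposition.
w_super = sum(w * c for w, c in zip(weights, keys))

# Use model k by binding the input with key k; the other models contribute
# only zero-mean interference on the order of sqrt(n_tasks / d).
k = 3
y_retrieved = w_super @ (probes[k] * keys[k])
print(f"target 1.000, retrieved {y_retrieved:.3f}")   # close to 1 for large d
```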
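
In a similar spirit, here is a minimal one-dimensional sketch of the empirical Bayes step behind Neural Empirical Bayes, with the learned neural score replaced by the score of an explicit Gaussian kernel density estimate to keep the example self-contained; the data, noise scale, and helper function are invented for illustration. Noisy observations are pulled back toward the data using the classical Miyasawa/Tweedie estimator x̂(y) = y + σ² ∇ log p(y).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: a two-component mixture standing in for "clean" samples.
x = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(2.0, 0.3, 500)])

sigma = 1.0                                     # smoothing / noise scale
y = x + rng.normal(0.0, sigma, size=x.shape)    # noisy observations

def smoothed_log_density_grad(y_query, data, sigma):
    """Score of a Gaussian-KDE estimate of the smoothed density p(y).

    Convolving the empirical data distribution with N(0, sigma^2) is exactly
    a Gaussian KDE with bandwidth sigma; its score has a closed form.
    """
    diffs = data[None, :] - y_query[:, None]    # (n_query, n_data)
    w = np.exp(-0.5 * (diffs / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return (w * diffs).sum(axis=1) / sigma ** 2

# Empirical Bayes (Miyasawa/Tweedie) estimate of the clean signal:
#   x_hat(y) = y + sigma^2 * d/dy log p(y)
x_hat = y + sigma ** 2 * smoothed_log_density_grad(y, x, sigma)

print("noisy RMSE:   ", np.sqrt(np.mean((y - x) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```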
The authors welcome questions and comments; thanks for reading! The papers are listed on our publications page.