This course provides an introduction to theories of neural computation, with an emphasis on the visual system. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, principles of neural coding and information processing, self-organization (learning rules), recurrent networks and attractor dynamics, hierarchical models, and computing with distributed representations.
Instructor: Bruno Olshausen, baolshausen@berkeley.edu, office hours immediately after class
GSI: Sophia Sanborn, sanborn@berkeley.edu
Lectures: Tuesdays & Thursdays 3:30-5, online.
Grading: based on challenge problems (60%) and final project (40%)
- Challenge problems will be posted on Notion. See Instructions and Guidelines here.
- Late challenge problems will not be accepted, but your lowest-scoring assignment will be dropped at the end of the semester
- Final project guidelines:
- 5-page report + poster or oral presentation on project presentation day (Dec. 10).
- You may work in teams of 3-4 students.
- The project should explore one of the topics covered in class in more depth, either mathematically or computationally, or it can be a critical analysis of the prospects for using these approaches to inform our understanding of the brain.
- Some possible project suggestions.
Textbooks:
- [HKP] Hertz, J., Krogh, A., and Palmer, R.G. Introduction to the Theory of Neural Computation. Amazon
- [DJCM] MacKay, D.J.C. Information Theory, Inference, and Learning Algorithms. Available online or Amazon
- [DA] Dayan, P. and Abbott, L.F. Theoretical neuroscience: computational and mathematical modeling of neural systems. Amazon
- [SL] Sterling, P. and Laughlin, S. Principles of Neural Design. MITCogNet
Discussion forum: We have established a Piazza site where students can ask questions or propose topics for discussion.
Recordings: Recordings are being migrated from Zoom to archive.org. Each “recording” link below should open the recording on archive.org. Recordings are also available here.
Syllabus
Aug. 27: Introduction
- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Intro Lecture slides (+AI neuralnets VSA) and recording
- Reading:
- HKP chapter 1, SL chapter 1
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint.
- Bell, A.J. Levels and Loops: the future of artificial intelligence and neuroscience.
- The 1973 Lighthill debate on the future of AI
Sept. 1: Brains
- Mammalian brain organization
- Insect and spider brains
- Lecture slides and recordings (there are two)
- Reading:
- SL chapters 2-4
- Solari & Stoner (2011) Cognitive Consilience.
- Additional neuroscience background:
- From Neuron to Brain, by Nicholls, et al. (good intro to neuroscience)
- Principles of Neural Science, by Kandel and Schwartz et al. (basic neuroscience textbook)
- The Synaptic Organization of the Brain, by Gordon Shepherd (good overview of neural circuits)
- Ion Channels of Excitable Membranes, by Bertil Hille (focuses on ion channel dynamics)
Sept. 3: Neural mechanisms and models
- Membrane equation, compartmental model of a neuron (a code sketch follows this lecture's reading list)
- Shunting inhibition, NMDA, dendritic nonlinearities
- Perceptron model
- Lecture slides and recording
- Reading:
- SL chapters 6 (pp. 138-154) and 7
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons from Analog VLSI and Neural Systems.
- Linear Neuron Models
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex.
- Background on dynamics, linear time-invariant systems and convolution, and differential equations
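For concreteness, here is a minimal single-compartment membrane simulation in Python/numpy, integrated by forward Euler as discussed in lecture; all parameter values are illustrative assumptions, not taken from the course materials:

```python
import numpy as np

# Minimal passive membrane: C dV/dt = -g_leak*(V - E_leak) + I(t).
C = 1.0          # membrane capacitance (nF) -- illustrative value
g_leak = 0.1     # leak conductance (uS)
E_leak = -70.0   # leak reversal potential (mV)
dt = 0.1         # time step (ms)
T = 200.0        # total duration (ms)

t = np.arange(0.0, T, dt)
I = np.where((t > 50) & (t < 150), 1.0, 0.0)  # 1 nA current pulse

V = np.empty_like(t)
V[0] = E_leak
for k in range(1, len(t)):
    dV = (-g_leak * (V[k-1] - E_leak) + I[k-1]) / C
    V[k] = V[k-1] + dt * dV   # forward-Euler integration step

# The membrane charges toward E_leak + I/g_leak with time constant C/g_leak = 10 ms.
print("steady-state depolarization ~", I.max() / g_leak, "mV")
```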
Sept. 8: Signal detection and amplification
- Computing with chemistry/allostery
- Phototransduction
- Lecture slides and recording
- Reading: SL chapters 5, 6, 8
Sept. 10: Physics of computation
- Horizontal cells and lateral inhibition; optimal pooling by bipolar cells (a lateral-inhibition sketch follows the reading list)
- Analog VLSI and silicon retina
- Lecture slides and recording
- Reading:
- SL chapter 11
- Mead, C., Chapter 5: Transconductance amplifier from Analog VLSI and Neural Systems.
- Mead, C., Chapter 15: Silicon retina from Analog VLSI and Neural Systems.
- Additional background:
- Thresholding synapses: Sampath & Rieke (2004) paper
- Transistor physics: Mead, C., Chapters 2 & 3 from Analog VLSI and Neural Systems.
- Resistive networks: Mead, C., Chapter 7/Appendix C from Analog VLSI and Neural Systems.
- Semiconductors: Carver Mead talk on “Lessons from the Early Days of Semiconductors”
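As a toy companion to the lateral-inhibition material, the sketch below applies a difference-of-Gaussians (center minus surround) filter to a 1-D luminance edge; the kernel widths and stimulus are arbitrary illustrative choices, not a circuit-level model of the retina:

```python
import numpy as np

# Lateral inhibition as center-surround filtering: a difference-of-Gaussians
# kernel applied to a step edge. A simplified stand-in for the horizontal-cell
# surround, not a resistive-network simulation.
def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

x = np.arange(-20, 21)
dog = gaussian(x, 1.5) - gaussian(x, 6.0)   # center minus surround (sums to ~0)

signal = np.concatenate([np.zeros(50), np.ones(50)])  # step edge
response = np.convolve(signal, dog, mode='same')

# The response is ~0 in uniform regions and biphasic at the edge (Mach bands):
# lateral inhibition removes the mean and enhances contrast boundaries.
print(response[40:60].round(3))
```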
Sept. 11: Challenge problem 1 due
Sept. 15, 17: Neural coding
- Spiking neuron models (a leaky integrate-and-fire sketch follows the reading list)
- Signal compression in retina: theory of retinal whitening
- Lecture slides and 9/15 recording, 9/17 recording
- Reading:
- SL chapters 10 and 11
- DA chapters 1-4, 5.4
- Mainen & Sejnowski, Reliability of Spike Timing in Neocortical Neurons.
- Eliasmith & Anderson, Temporal representation in spiking neurons (chapter 4)
- Karklin & Simoncelli, Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons
- Further background:
- Spikes: Exploring the Neural Code, by Rieke, Warland, de Ruyter van Steveninck & Bialek
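A minimal leaky integrate-and-fire simulation in Python/numpy, the simplest of the spiking models above; the noisy input echoes the Mainen & Sejnowski reliability theme, and all parameter values are illustrative assumptions:

```python
import numpy as np

# Leaky integrate-and-fire: integrate toward V_rest + R*I, spike and reset at threshold.
tau = 10.0       # membrane time constant (ms) -- illustrative
V_rest, V_thresh, V_reset = -70.0, -54.0, -70.0   # mV
R = 10.0         # membrane resistance (MOhm)
dt = 0.1         # ms

rng = np.random.default_rng(0)
t = np.arange(0, 500, dt)
I = 1.8 + 0.5 * rng.standard_normal(len(t))   # noisy input current (nA)

V = np.full(len(t), V_rest)
spikes = []
for k in range(1, len(t)):
    V[k] = V[k-1] + dt / tau * (-(V[k-1] - V_rest) + R * I[k-1])
    if V[k] >= V_thresh:          # threshold crossing -> emit a spike
        spikes.append(t[k])
        V[k] = V_reset            # reset after the spike

print(f"{len(spikes)} spikes, mean rate {1000 * len(spikes) / t[-1]:.1f} Hz")
```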
Sept. 22: Retinal tiling
- Cone tiling
- Division of labor by midget and parasol RGCs
- Foveated sampling
- Lecture slides and recording
- Reading:
- Van Essen & Anderson, Information Processing Strategies and Pathways in the Primate Visual System
- Further research:
- Cheung, Weiss & Olshausen, Emergence of foveal image sampling from learning to attend in visual scenes
Sept. 24: Auditory coding
- Cochlea and auditory nerve
- Time-frequency analysis (an STFT sketch follows the reading list)
- Phase and amplitude coding by spikes
- ICA of natural sound
- Lecture slides and recording
- Reading:
- Lecture notes on Sound and the ear, and Auditory information processing
- Olshausen & O’Connor, A New Window On Sound
- Lewicki, Efficient coding of natural sounds
- Sparse coding and ICA
- Auditory demonstrations
- Further background:
- R.F. Lyon, Human and Machine Hearing
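To make the time-frequency trade-off concrete, here is a bare-bones short-time Fourier transform of a frequency sweep in Python/numpy; the signal, window, and hop sizes are arbitrary illustrative choices:

```python
import numpy as np

# Short-time Fourier transform of a chirp: each windowed frame gives one
# time slice of a spectrogram, illustrating the time/frequency resolution
# trade-off that the cochlea also faces.
fs = 8000                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
chirp = np.sin(2 * np.pi * (200 + 800 * t) * t)   # sweeps ~200 Hz -> ~1800 Hz

win, hop = 256, 128
window = np.hanning(win)
frames = [chirp[i:i + win] * window
          for i in range(0, len(chirp) - win, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame

# Frequency resolution is fs/win = 31.25 Hz, time resolution is hop/fs = 16 ms;
# shrinking one necessarily grows the other.
peak_bins = spectrogram.argmax(axis=1)
print("peak frequency over time (Hz):", (peak_bins * fs / win)[::10])
```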
Sept. 29: Starburst amacrine cells and the computation of motion (Rowland Taylor)
- Reading:
- Vaney, Sivyer & Taylor, Direction selectivity in the retina
- Borst & Helmstaedter, Common circuit design in fly and mammalian motion vision
- Lecture slides and recording
- Further background: Kandel & Schwartz (5th Ed), Ch. 26 (a Reichardt-correlator sketch follows)
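As a toy illustration of delay-and-multiply motion detection, here is a minimal Hassenstein-Reichardt correlator in Python/numpy; the stimulus, delay, and sign conventions are illustrative assumptions, not a model of the starburst amacrine circuit itself:

```python
import numpy as np

# Hassenstein-Reichardt correlator: two inputs, a delay line, multiplication,
# and opponent subtraction yield a direction-selective response.
dt = 1.0                      # ms per sample
delay = 10                    # delay line length (samples)
t = np.arange(0, 1000, dt)

def detector(phase_lag):
    # Two "photoreceptors" sample a drifting grating at nearby points;
    # the sign of phase_lag sets the direction of motion.
    s1 = np.sin(2 * np.pi * 0.005 * t)
    s2 = np.sin(2 * np.pi * 0.005 * t + phase_lag)
    d1, d2 = np.roll(s1, delay), np.roll(s2, delay)   # delayed copies
    # Opponent subtraction of the two multiply branches:
    return np.mean(s1 * d2 - s2 * d1)

print("one direction:     ", detector(+0.5))   # positive mean response
print("opposite direction:", detector(-0.5))   # equal magnitude, opposite sign
```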
Oct. 1: Neural coding in Lateral Geniculate Nucleus (LGN) (Fritz Sommer)
- Reading:
- Koepsell, ..., Sommer, Retinal oscillations carry visual information to cortex
- Lecture slides and recording
Oct. 2: Challenge problem 2 due
Oct. 6, 8, 13: Inference
- LGN and Cortex
- Overcomplete representation in primary visual cortex
- Hebbian learning and PCA
- Sparse coding model of V1 (a sparse-inference sketch follows the reading list)
- Lecture slides (part 1) and recording (10/6)
- Lecture slides (part 2) and recording (10/8)
- Lecture recording (10/13)
- Reading:
- Olshausen (2013) Perception as an Inference Problem
- Handout on Hebbian learning and PCA
- Foldiak (1990) Forming sparse representations by local anti-Hebbian learning
- Handout on Sparse Coding and ICA
- Olshausen & Field (1997) Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?
- Olshausen (2013) Highly overcomplete sparse coding
- Further background:
- Hyvarinen, Hurri & Hoyer Natural Image Statistics
- Olshausen & Field (2005) How close are we to understanding V1?
- Lee & Mumford (2003) Hierarchical Bayesian inference in the visual cortex.
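One standard way to carry out the inference step of the sparse coding model is iterative shrinkage (ISTA), which minimizes ||x - Φa||²/2 + λ||a||₁. The sketch below, in Python/numpy, uses a random dictionary and synthetic data as stand-ins for learned basis functions and image patches:

```python
import numpy as np

# ISTA for sparse inference: gradient step on reconstruction error,
# followed by a soft-threshold that enforces the L1 sparseness penalty.
rng = np.random.default_rng(0)
n_pix, n_basis = 64, 128                    # overcomplete: more basis functions than pixels
Phi = rng.standard_normal((n_pix, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm basis functions

a_true = np.zeros(n_basis)
a_true[rng.choice(n_basis, 5, replace=False)] = rng.standard_normal(5)
x = Phi @ a_true                            # synthesize a sparse "patch"

lam = 0.1
eta = 1.0 / np.linalg.norm(Phi.T @ Phi, 2)  # step size from the Lipschitz bound
a = np.zeros(n_basis)
for _ in range(500):
    grad = Phi.T @ (Phi @ a - x)            # gradient of reconstruction error
    z = a - eta * grad
    a = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)   # soft threshold

print("nonzero coefficients:", np.sum(np.abs(a) > 1e-3))
print("reconstruction error:", np.linalg.norm(x - Phi @ a))
```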
Oct. 15, 20: Organization, topography
- Horizontal connections
- Self-organizing maps (a Kohonen-map sketch follows the reading list)
- Manifold models
- Lecture slides and 10/15 recording
- Lecture 10/20 recording
- Reading:
- HKP chapter 9, DA chapter 8
- Hyvarinen, Hoyer, Inki, Topographic Independent Component Analysis
- Roweis & Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding
- Chen, Paiton, Olshausen, The sparse manifold transform
- Field (1993), Contour Integration by the Human Visual System: Evidence for a Local “Association Field”
- Bosking, ..., Fitzpatrick (1997), Orientation Selectivity and the Arrangement of Horizontal Connections in Tree Shrew Striate Cortex
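A minimal Kohonen self-organizing map in Python/numpy: a 1-D chain of units learns a topographic covering of a 2-D input space. The sizes and annealing schedule are illustrative assumptions:

```python
import numpy as np

# Kohonen SOM: winner-take-all competition plus a neighborhood function on
# the lattice pulls nearby units toward similar inputs, yielding topography.
rng = np.random.default_rng(0)
n_units = 20
W = rng.uniform(size=(n_units, 2))          # each unit's weight vector in input space

for step in range(5000):
    frac = step / 5000
    eta = 0.5 * (1 - frac) + 0.01           # learning rate anneals
    sigma = 3.0 * (1 - frac) + 0.5          # neighborhood width shrinks
    x = rng.uniform(size=2)                 # random input from the unit square
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    # Gaussian neighborhood on the 1-D lattice of unit indices:
    h = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * sigma ** 2))
    W += eta * h[:, None] * (x - W)         # pull winner and neighbors toward x

# Neighboring units end up with nearby weights (topographic ordering):
print(np.linalg.norm(np.diff(W, axis=0), axis=1).round(2))
```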
Oct. 20: Challenge problem 3 due
Oct. 22, 27: Attractor dynamics
- Hopfield networks, memories as basins of attraction (a recall sketch follows the reading list)
- Line attractors and ‘bump circuits’
- Lecture slides and recording (10/22)
- Lecture recording (10/27)
- Reading:
- HKP chapters 2, 3 (sec. 3.3-3.5) and 7 (sec. 7.2-7.3), DJCM chapter 42, DA chapter 7
- Handout on attractor networks – their learning, dynamics and how they differ from feed-forward networks
- Hopfield (1982)
- Hopfield (1984)
- Additional background:
- Willshaw (1969)
- Marr-Poggio stereo algorithm
- Kechen Zhang paper on bump circuits
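A minimal Hopfield network in Python/numpy, with Hebbian (outer-product) storage and asynchronous updates; the network size, number of patterns, and noise level are illustrative choices:

```python
import numpy as np

# Hopfield network: stored patterns become fixed points of the dynamics,
# so a corrupted cue relaxes back into its basin of attraction.
rng = np.random.default_rng(0)
N, P = 100, 5                               # well below the ~0.14N capacity limit
patterns = rng.choice([-1, 1], size=(P, N))

W = patterns.T @ patterns / N               # Hebbian outer-product rule
np.fill_diagonal(W, 0)                      # no self-connections

# Start from a corrupted version of pattern 0 (15% of bits flipped):
s = patterns[0].copy()
flip = rng.choice(N, 15, replace=False)
s[flip] *= -1

for _ in range(10):                          # asynchronous update sweeps
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

print("overlap with stored pattern:", (s @ patterns[0]) / N)  # ~1.0 if recalled
```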
Oct. 29: Probabilistic models
- Learning and inference in generative models
- Boltzmann machines
- Restricted Boltzmann machines and energy-based models (a contrastive-divergence sketch follows the reading list)
- Lecture slides and recording
- Reading:
- HKP chapter 7 (sec. 7.1), DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
- A probability primer
- Bayesian probability theory and generative models
- Mixture of Gaussians model
- Additional background:
- D.J.C. MacKay, Bayesian Methods for Adaptive Models (Ph.D. Thesis)
- Crick and Mitchison theory on ‘unlearning’ during sleep – paper
- Application of Boltzmann machines to neural data analysis:
- E. Schneidman, M.J. Berry, R. Segev and W. Bialek, Weak pairwise correlations imply strongly correlated network states in a neural population, Nature 440 (7087) (2006), pp. 1007-1012.
- J. Shlens, G.D. Field, J.L. Gauthier, M.I. Grivich, D. Petrusca, A. Sher, A.M. Litke and E.J. Chichilnisky, The structure of multi-neuron firing patterns in primate retina, J Neurosci 26 (32) (2006), pp. 8254-8266.
- U. Koster, J. Sohl-Dickstein, C.M. Gray, B.A. Olshausen, Modeling higher-order correlations within Cortical Microcolumns, PLOS Computational Biology, July 2014.
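A bare-bones restricted Boltzmann machine trained by one-step contrastive divergence (CD-1) in Python/numpy; the toy dataset and all sizes are illustrative assumptions:

```python
import numpy as np

# RBM trained with CD-1: raise the probability of data configurations,
# lower that of one-step Gibbs "fantasy" configurations.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

n_vis, n_hid = 8, 4
W = 0.1 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)     # visible / hidden biases

# Toy dataset: two prototype patterns plus bit-flip noise.
protos = np.array([[1,1,1,1,0,0,0,0], [0,0,0,0,1,1,1,1]], dtype=float)
data = protos[rng.integers(2, size=200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))   # 5% bit flips

eta = 0.1
for epoch in range(200):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + c)                       # P(h=1 | v0)
        h0 = (rng.random(n_hid) < ph0).astype(float)    # sample hidden units
        pv1 = sigmoid(h0 @ W.T + b)                     # reconstruct visibles
        v1 = (rng.random(n_vis) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + c)
        W += eta * (np.outer(v0, ph0) - np.outer(v1, ph1))   # CD-1 update
        b += eta * (v0 - v1)
        c += eta * (ph0 - ph1)

recon = sigmoid(sigmoid(protos @ W + c) @ W.T + b)      # mean-field reconstruction
print(recon.round(2))   # rows should resemble the two prototypes
```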
Nov. 3: Inference – denoising, dynamical models
- Denoising with a sparse coding prior
- Dynamical models (a Kalman-filter sketch follows the reading list)
- Lecture slides and recording
- Reading:
- Simoncelli & Adelson paper on Bayesian wavelet coring
- Robbie Jacobs’ notes on Kalman filter
- kalman.m demo script
- Greg Welch’s tutorial on Kalman filter
- Further background:
- Dynamic texture models
- Kevin Murphy’s HMM tutorial
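A minimal scalar Kalman filter in Python/numpy, a rough analogue of (not a translation of) the kalman.m demo script; the random-walk model and noise variances are illustrative assumptions:

```python
import numpy as np

# Scalar Kalman filter: track a random-walk state x from noisy observations y.
# Model: x_t = x_{t-1} + w, w ~ N(0, Q);  y_t = x_t + v, v ~ N(0, R).
rng = np.random.default_rng(0)
T, Q, R = 200, 0.01, 1.0

x = np.cumsum(np.sqrt(Q) * rng.standard_normal(T))   # true latent trajectory
y = x + np.sqrt(R) * rng.standard_normal(T)          # noisy observations

x_hat, P = 0.0, 1.0                                  # initial estimate and its variance
estimates = []
for t in range(T):
    # Predict: random-walk dynamics keep the mean, inflate the variance.
    P = P + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P / (P + R)
    x_hat = x_hat + K * (y[t] - x_hat)
    P = (1 - K) * P
    estimates.append(x_hat)

err_raw = np.mean((y - x) ** 2)
err_filt = np.mean((np.array(estimates) - x) ** 2)
print(f"MSE raw {err_raw:.3f} vs filtered {err_filt:.3f}")   # filtered << raw
```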
Nov. 5, 10: Hierarchical models and invariance
- Learning of higher-order structure, part-whole relationships
- Models of invariance
- Three-way interactions/dynamic routing
- Complex cells, power spectrum vs. bispectrum (an energy-model sketch follows the reading list)
- Lecture slides and recording
- Sophia 11/10 lecture slides and recording
- Reading:
- Fukushima (1980), Neocognitron
- Nguyen, Yosinski, Clune (2014), Deep Neural Networks are Easily Fooled
- Hinton and Salakhutdinov, ‘deep belief networks’ – paper
- Lee et al. (2009), convolutional deep belief networks
- Hinton (1981) – A parallel computation that assigns canonical object-based frames of reference
- Olshausen (1993) – Dynamic routing model
- Tenenbaum & Freeman (2000) – Bilinear models
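The "energy model" of complex cells makes the invariance point compact: summing the squared responses of a quadrature pair of Gabor filters discards phase, much as the power spectrum does. A minimal sketch in Python/numpy, with illustrative filter and stimulus parameters:

```python
import numpy as np

# Complex-cell energy model: (x . g_even)^2 + (x . g_odd)^2 is invariant to
# small shifts of the stimulus, while a single "simple cell" projection is not.
x = np.arange(-32, 32)
sigma, f = 6.0, 0.1
envelope = np.exp(-x**2 / (2 * sigma**2))
g_even = envelope * np.cos(2 * np.pi * f * x)   # quadrature pair of Gabors
g_odd = envelope * np.sin(2 * np.pi * f * x)

def responses(shift):
    stim = np.cos(2 * np.pi * f * (x - shift))  # shifted grating
    simple = stim @ g_even                      # phase-sensitive projection
    energy = (stim @ g_even) ** 2 + (stim @ g_odd) ** 2   # phase-invariant
    return simple, energy

for shift in [0, 2, 4]:
    simple, energy = responses(shift)
    print(f"shift {shift}: simple {simple:8.2f}  energy {energy:10.2f}")
# The simple response varies with shift; the energy stays roughly constant,
# analogous to how the power spectrum discards phase.
```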
Nov. 6: Challenge problem 4 due
Nov. 12: Project mini-presentations
Nov. 17: Perception-action loop
- Active perception
- Representations for perception vs. action
- Thalamocortical loop as sensorimotor loop
- Lecture slides and recording
- Reading:
- O’Regan & Noë (2001) – A sensorimotor account of vision and visual consciousness
- Philipona, O’Regan & Nadal (2003) – Is there something out there?
- Guillery & Sherman (2011) – Branched thalamic afferents
- Olshausen (2012) – 20 years of learning about vision: Questions answered, Questions unanswered, and Questions not yet asked.
Nov. 19: Supervised learning (and cerebellum)
- Delta learning rule and back-propagation (an XOR back-propagation sketch follows the reading list)
- Supervised learning in cerebellum
- Kanerva’s Sparse Distributed Memory Model
- Lecture slides [supervised learning, cerebellum and SDM] and recording
- Reading:
- HKP chapters 5-6, DJCM chapters 38-40, 44, DA chapter 8 (sec. 4-6)
- Supervised learning in single-stage feedforward networks
- Supervised learning in multi-layer feedforward networks – “back propagation”
- Kanerva, Sparse Distributed Memory and related models
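A minimal back-propagation example in Python/numpy: a two-layer network trained on XOR, i.e. the delta rule generalized through one hidden layer. Architecture, learning rate, and iteration count are illustrative choices:

```python
import numpy as np

# Two-layer network trained on XOR with plain back-propagation.
rng = np.random.default_rng(0)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

X = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
y = np.array([[0],[1],[1],[0]], dtype=float)           # XOR targets

W1 = rng.standard_normal((2, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1)); b2 = np.zeros(1)
eta = 1.0

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: delta = error times the derivative of the sigmoid,
    # propagated back through the output weights to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(0)
    W1 -= eta * X.T @ d_h;   b1 -= eta * d_h.sum(0)

print(out.round(2).ravel())   # approaches [0, 1, 1, 0]
```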
Nov. 24: Reinforcement learning and basal ganglia (Sophia)
Nov. 26: Thanksgiving holiday (no class)
Dec. 1, 3: Hyperdimensional computing
- Local vs. distributed representation
- Binding, bundling and sequencing (a binding/unbinding sketch follows the reading list)
- Computing in superposition
- Lecture (Dec. 1) slides and recording
- Lecture (Dec. 3) slides (Olshausen, Kent/Maudgalya) and recording
- Reading:
- Plate: Holographic Reduced Representations
- Gayler: Vector Symbolic Architectures
- Kanerva: Hyperdimensional computing
- Joshi, Halseth, Kanerva: Language geometry using random indexing
- Frady, Kleyko, Sommer: Sequence indexing and working memory
- Frady et al., Resonator Networks, 1
- Kent et al., Resonator Networks, 2
- Other resources:
- HD computing/VSA website (in progress)
- VSA online seminar series
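A minimal binding/bundling/unbinding example in Python/numpy using random bipolar vectors, in the style of Gayler's multiply-add VSA operations; the dimension and codebook are illustrative:

```python
import numpy as np

# VSA-style key-value memory: bind by elementwise multiply, bundle by
# summation, unbind by multiplying with the (self-inverse) role key.
rng = np.random.default_rng(0)
D = 10_000
vec = lambda: rng.choice([-1, 1], size=D)

# Random role and filler vectors.
name, color, shape = vec(), vec(), vec()
alice, red, square = vec(), vec(), vec()

# A "record" is the superposition (bundle) of role-filler bindings:
record = name * alice + color * red + shape * square

# Unbind: multiply by the role key, then find the nearest codebook item.
query = record * color
codebook = {'alice': alice, 'red': red, 'square': square}
sims = {k: round((query @ v) / D, 2) for k, v in codebook.items()}
print(sims)   # 'red' ~ 1.0; the other bindings contribute ~1/sqrt(D) crosstalk
```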
Dec. 8: Neural oscillations and synchrony
- Binding by synchrony
- Coupled oscillator networks (a Kuramoto-model sketch follows the reading list)
- Computing with waves
- Lecture slides and recording
- Reading:
- Wang & Roychowdhury, Oscillator-based Ising Machines
- Frady & Sommer, Threshold phasor associative memory
- Vadlamani, Xiao & Yablonovitch, Physics successfully implements Lagrange multiplier optimization
- Feynman, Chapter 26, Optics: The principle of least time
- Further background:
- Koepsell, Wang, Hirsch & Sommer, Exploring the function of neural oscillations in early sensory systems
- Agarwal, et al., Spatially distributed local field potentials in the hippocampus encode rat position
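A minimal Kuramoto model in Python/numpy, the standard toy model for synchronization in coupled oscillator networks; the population size, coupling strength, and frequency distribution are illustrative assumptions:

```python
import numpy as np

# Kuramoto model: N phase oscillators with all-to-all sinusoidal coupling
# synchronize once the coupling K exceeds a critical value.
rng = np.random.default_rng(0)
N, K, dt = 100, 4.0, 0.01
omega = rng.standard_normal(N)              # heterogeneous natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)        # random initial phases

def order_parameter(theta):
    # |r| = 1 means full synchrony, ~0 means incoherence.
    return np.abs(np.mean(np.exp(1j * theta)))

print("before:", round(order_parameter(theta), 2))
for _ in range(5000):
    # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)
print("after: ", round(order_parameter(theta), 2))   # near 1 for strong coupling
```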
Dec. 10: Project presentations