This course provides an introduction to theories of neural computation, with an emphasis on the visual system. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, principles of neural coding and information processing, self-organization (learning rules), recurrent networks and attractor dynamics, dynamical systems, probabilistic models, and computing with distributed representations.
Instructor: Bruno Olshausen, baolshausen@berkeley.edu, office hours immediately after class
GSI: Galen Chuang, galenc@berkeley.edu, office hours Wednesdays 5-6, Warren 205A
Lectures: Tuesdays & Thursdays 3:30-5, Warren 205A
Grading: Based on problem sets (60%), final project (30%), and class participation (10%)
- Problem sets will be posted on this webpage and should be submitted via bCourses.
- Problem sets are due before class on the due date; no exceptions. To accommodate difficult situations, your lowest-scoring problem set will be dropped at the end of the semester.
- You may work in small groups (2-3) on the problem sets but are responsible for submitting individually.
- Final project guidelines (more details to come):
- 5-page report plus a poster or oral presentation at project presentation day (early December).
- You may work in teams of 3-4 students.
- The project should explore one of the topics covered in class in more depth, either mathematically or computationally, or it can be a critical analysis of the prospects for how these approaches can inform our understanding of the brain.
- Some suggestions for possible final projects.
Textbooks:
- [HKP] Hertz, J., Krogh, A., and Palmer, R.G. Introduction to the Theory of Neural Computation. Amazon
- [DJCM] MacKay, D.J.C. Information Theory, Inference and Learning Algorithms. Available online or Amazon
- [DA] Dayan, P. and Abbott, L.F. Theoretical neuroscience: computational and mathematical modeling of neural systems. Amazon
- [SL] Sterling, P. and Laughlin, S. Principles of Neural Design. MITCogNet
Discussion forum: We have established an Ed site where students can ask questions or propose topics for discussion.
Topic and Assignment Schedule
The first ten weeks are organized into six topic modules with five accompanying problem sets. The remaining five weeks are devoted to the final project.
| Topic | Assignment | Release Date | Due Date |
| --- | --- | --- | --- |
| 1. Animal behavior and brain organization | | | |
| 2. Sensory coding | Problem Set 1 (Colab) | Sept. 5 | Sept. 12 |
| 3. Biophysics of computation and neural coding | Problem Set 2 (Colab + kernel) | Sept. 19 | Oct. 1 |
| 4. Representation learning | Problem Set 3 (Colab) | Oct. 1 | Oct. 15 |
| 5. Attractor networks and probabilistic models | Problem Set 4 (Colab, Part 2 weights) | Oct. 15 | Oct. 29 |
| 6. Computing with distributed representations | Problem Set 5 | Oct. 29 | Nov. 8 |
| – | Final Project Proposal | | Nov. 12 |
| – | Final Project Presentation | | Dec. 12 (tentative) |
| – | Final Project Writeup | | Dec. 19 |
Syllabus
Course intro: course logistics and what this course is about | Aug. 29
- Reading:
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint.
- Mitchell, M. Why AI is harder than we think
- Supplemental reading:
- Additional neuroscience background:
- From Neuron to Brain, by Nicholls, et al. (good intro to neuroscience)
- Principles of Neural Science, by Kandel and Schwartz et al. (basic neuroscience textbook)
- The Synaptic Organization of the Brain, by Gordon Shepherd (good overview of neural circuits)
- Ion Channels of Excitable Membranes, by Bertil Hille (focuses on ion channel dynamics)
- Lecture slides
Topic 1: Animal behavior, What are brains for? | Sept. 3
- Reading:
- SL chapters 2-4
- Lewicki, Olshausen, Surlykke & Moss (2014) Scene analysis in the natural environment.
- Lecture slides
Topic 2a: Sensory coding – vision | Sept. 5
- Phototransduction
- Signal detection and optimal pooling
- Reading:
- SL chapter 8, 11
- DA chapter 3.2
- Sampath & Rieke (2004) paper on thresholding synapses
- The Inner Life of the Cell video
- Lecture slides | recording
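A minimal sketch (not from the course materials) of the optimal-pooling idea in this topic: averaging N independent noisy detectors improves signal-to-noise ratio by a factor of sqrt(N). The signal and noise levels below are arbitrary choices, not physiological values.

```python
import numpy as np

# Pooling N independent noisy detectors (e.g. photoreceptors) improves
# SNR by sqrt(N). Signal and noise magnitudes are toy values.
rng = np.random.default_rng(0)
signal, sigma, trials = 1.0, 5.0, 100_000
for N in (1, 16, 256):
    responses = signal + sigma * rng.standard_normal((trials, N))
    pooled = responses.mean(axis=1)            # average over the pool
    snr = pooled.mean() / pooled.std()
    print(f"N={N:3d}: SNR = {snr:5.2f} (theory: {np.sqrt(N) * signal / sigma:5.2f})")
```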
Topic 2b: Sensory coding – audition | Sept. 10
- Cochlea and auditory nerve
- Time-frequency analysis
- Reading
- SL chapter 10, pp. 268-273
- Lecture notes on Sound, Ear, and Auditory Information Processing
- Olshausen & O’Connor, A New Window On Sound
- Lewicki, Efficient coding of natural sounds
- Auditory demonstrations
- Further background:
- R.F. Lyon, Human and Machine Hearing
- Lecture slides
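A short sketch of the time-frequency analysis named in this topic, using a windowed FFT (short-time Fourier transform); the tone frequencies and window length below are arbitrary choices, and the window length sets the trade-off between time and frequency resolution.

```python
import numpy as np

# Short-time Fourier transform of a signal that switches from a 440 Hz
# tone to an 880 Hz tone halfway through.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))
win, hop = 512, 256                             # 32 ms window, 50% overlap
frames = np.array([x[i:i + win] * np.hanning(win)
                   for i in range(0, len(x) - win, hop)])
S = np.abs(np.fft.rfft(frames, axis=1))         # magnitude spectrogram
peak_hz = S.argmax(axis=1) * fs / win           # dominant frequency per frame
print(peak_hz[:3], peak_hz[-3:])                # ~440 Hz early, ~880 Hz late
```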
Topic 3a: Biophysics of computation | Sept. 12, 17
- Passive membrane / RC circuit
- Spiking neuron models
- Leaky Integrate-and-Fire
- Linear-Nonlinear Poisson
- Reading:
- SL chapters 6 (pp. 138-154), and 7
- DA, Chapter 5.1-5.6
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons from Analog VLSI and Neural Systems.
- Handout on Linear Neuron Models
- Additional background:
- Dynamics with differential equations
- Simulating differential equations
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex.
- Sept. 12 Lecture slides | recording
- Sept. 17 Lecture slides (spikes) | recording part 1, part 2
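A minimal simulation sketch of the leaky integrate-and-fire model listed above, using Euler integration of the membrane equation; all parameter values are illustrative, not taken from the readings.

```python
import numpy as np

# Euler integration of a leaky integrate-and-fire neuron:
#   tau * dV/dt = -(V - V_rest) + R * I(t),  spike and reset at threshold.
tau = 20e-3                       # membrane time constant (s)
R = 1e7                           # membrane resistance (ohms)
V_rest, V_thresh, V_reset = -70e-3, -54e-3, -70e-3   # volts
dt, T = 1e-4, 0.5                 # time step and duration (s)

t = np.arange(0, T, dt)
I = 2e-9 * np.ones_like(t)        # constant 2 nA input current
V = np.full_like(t, V_rest)
spike_times = []
for k in range(1, len(t)):
    dV = (-(V[k - 1] - V_rest) + R * I[k - 1]) / tau
    V[k] = V[k - 1] + dt * dV
    if V[k] >= V_thresh:          # threshold crossing: emit spike, reset
        spike_times.append(t[k])
        V[k] = V_reset
print(f"{len(spike_times)} spikes, mean rate {len(spike_times) / T:.1f} Hz")
```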
Topic 3b: Neural encoding and decoding | Sept. 19
- Encoding and decoding models
- Phase and amplitude coding by spikes
- Oscillations and synchrony
- Reading:
- SL chapter 10
- DA chapters 1-4, 5.4
- Mainen & Sejnowski, Reliability of Spike Timing in Neocortical Neurons.
- Eliasmith & Anderson, Temporal representation in spiking neurons (chapter 4)
- Koepsell, ..., Sommer, Retinal oscillations carry visual information to cortex
- Further background:
- Spikes: Exploring the Neural Code, by Rieke, Warland, de Ruyter van Steveninck & Bialek
- Lecture slides | recording part 1, part 2
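A sketch of a simple encoding model in the spirit of this topic: a linear-nonlinear-Poisson (LNP) neuron, in which the stimulus is filtered, rectified into a firing rate, and spike counts are drawn from a Poisson distribution. The filter shape, nonlinearity, and rate scale below are arbitrary choices.

```python
import numpy as np

# Linear-nonlinear-Poisson encoding of a white-noise stimulus.
rng = np.random.default_rng(0)
dt, T = 1e-3, 2.0
t = np.arange(0, T, dt)
stim = rng.standard_normal(len(t))
k = np.exp(-np.arange(50) * dt / 0.02)          # 20 ms exponential filter
drive = np.convolve(stim, k)[:len(t)]           # causal temporal filtering
rate = 40.0 * np.maximum(drive, 0)              # rectifying nonlinearity (spikes/s)
spikes = rng.poisson(rate * dt)                 # Poisson spike counts per bin
print("total spikes:", spikes.sum(), " mean rate:", spikes.sum() / T, "Hz")
```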
Topic 3c: Efficient coding | Sept. 24
- Signal compression in retina: theory of whitening
- Tiling and division of labor by different cell classes
- Reading
- Karklin & Simoncelli, Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons
- Van Essen & Anderson, Information Processing Strategies and Pathways in the Primate Visual System
- Further reading:
- Chichilnisky paper
- Lecture slides | recording part 1, part 2
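A sketch of the whitening theory above, assuming ZCA whitening of toy correlated data (natural image patches would be used in practice): rotate into the eigenbasis of the covariance, rescale each axis to unit variance, and rotate back.

```python
import numpy as np

# ZCA whitening: decorrelate the data and equalize variance across axes.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64)) @ rng.standard_normal((64, 64))  # toy correlated data
X -= X.mean(axis=0)                              # center the data
C = X.T @ X / len(X)                             # covariance matrix
evals, E = np.linalg.eigh(C)                     # C = E diag(evals) E^T
W = E @ np.diag(1.0 / np.sqrt(evals)) @ E.T      # ZCA whitening matrix
Xw = X @ W.T
Cw = Xw.T @ Xw / len(Xw)                         # should be ~identity
print("max deviation from identity:", np.abs(Cw - np.eye(64)).max())
```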
Topic 3d: Physics of computation | Sept. 26
- Transconductance amplifier
- Analog VLSI and silicon retina
- Reading:
- Mead, C., Chapter 5: Transconductance amplifier from Analog VLSI and Neural Systems.
- Mead, C., Chapter 15: Silicon retina from Analog VLSI and Neural Systems.
- Additional background:
- Transistor physics: Mead, C., Chapters 2 & 3 from Analog VLSI and Neural Systems.
- Resistive networks: Mead, C., Chapter 7/Appendix C from Analog VLSI and Neural Systems.
- Semiconductors: Carver Mead talk on “Lessons from the Early Days of Semiconductors”
- Lecture slides | recording part 1 (wrap-up of efficient coding from Sept. 26)
Topic 4: Representation learning | Oct. 1-10
- Supervised learning
- Hebbian learning and PCA
- Winner-take-all learning
- Sparse Coding
- Slow Feature Analysis (invariance)
- Manifold learning
- Reading:
- HKP chapters 5,8,9
- DA Chapter 10
- Handout on Supervised learning in single-stage feedforward networks
- Handout on Hebbian learning and PCA
- Foldiak (1990) Forming sparse representations by local anti-Hebbian learning
- Olshausen & Field (1996) Sparse coding paper
- Wiskott & Sejnowski, Slow feature analysis
- Roweis & Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding
- Chen, Paiton, Olshausen, The sparse manifold transform
- Lecture slides (Oct. 1, 3) | recording (perceptron learning) (Oct. 1) | recording (Hebbian learning/PCA) (Oct. 3)
- Lecture slides (Oct. 8) | recording part 1 (winner-take-all learning), part 2 (sparse coding)
- Lecture slides (Oct. 10) | recording (continuation of sparse coding)
- Oct. 15 recording (manifold learning)
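A minimal sketch of the Hebbian-learning/PCA connection covered above: Oja's rule, a Hebbian update with built-in weight normalization, drives a single linear neuron's weight vector to the leading principal component of the data. The 2-D toy data and learning rate are arbitrary choices.

```python
import numpy as np

# Oja's rule:  dw = eta * y * (x - y * w),  with y = w . x.
# The weight vector converges to the top principal component.
rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.standard_normal((5000, 2)) @ A.T        # zero-mean correlated data
w = rng.standard_normal(2)
eta = 0.01
for x in X:
    y = w @ x                                   # linear neuron output
    w += eta * y * (x - y * w)                  # Oja's rule
evals, E = np.linalg.eigh(X.T @ X / len(X))     # eigenvectors, ascending order
print("Oja :", w / np.linalg.norm(w))
print("PCA :", E[:, -1], "(may differ by sign)")
```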
Topic 5a: Attractor dynamics | Oct. 15, 17
- Hopfield networks
- Continuous Attractor Networks
- Reading:
- HKP chapters 2, 3 (sec. 3.3-3.5), and 7 (sec. 7.2-7.3)
- DJCM chapter 42
- DA chapter 7
- Handout on attractor networks
- Hopfield (1982)
- Hopfield (1984)
- Kechen Zhang paper on bump circuits
- Lecture slides (Oct. 15)
- Lecture slides and recording (Oct. 17)
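A sketch of a binary Hopfield network in the spirit of Hopfield (1982), listed above: Hebbian outer-product storage and asynchronous threshold updates recover a stored pattern from a corrupted cue. The network size and number of stored patterns are arbitrary choices.

```python
import numpy as np

# Hopfield network: Hebbian storage, asynchronous recall dynamics.
rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                 # Hebbian weight matrix
np.fill_diagonal(W, 0)                          # no self-connections

x = patterns[0].copy()
flip = rng.choice(N, 20, replace=False)
x[flip] *= -1                                   # corrupt 20% of the bits
for _ in range(5):                              # a few asynchronous sweeps
    for i in rng.permutation(N):
        x[i] = 1 if W[i] @ x >= 0 else -1
print("overlap with stored pattern:", (x @ patterns[0]) / N)  # ~1.0 on success
```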
Topic 5b: Probabilistic models | Oct. 22, 24
- Perception as inference
- Boltzmann machine
- Bayesian inference
- Sparse coding and ICA
- Dynamical models (Kalman filter)
- Reading:
- HKP chapter 7.1
- DJCM chapters 1-3, 20-24, 28, 41, 43
- DA chapter 10
- Mumford, Neuronal architectures for pattern theoretic problems.
- Yuille & Kersten, Vision as Bayesian inference: analysis by synthesis?
- Olshausen (2013) Perception as an Inference Problem
- A probability primer
- Bayesian probability theory and generative models
- Sparse Coding and ICA
- Hinton & Sejnowski, Learning and Relearning in Boltzmann Machines
- Additional reading:
- Simoncelli & Adelson paper on Bayesian wavelet coring
- Weiss & Simoncelli, Motion illusions as optimal percepts
- Koster et al., Modeling higher-order correlations within cortical microcolumns
- Robbie Jacobs’ notes on Kalman filter
- kalman.m demo script
- Dynamic texture models
- Lecture slides (Oct. 22)
- Lecture slides (Oct. 24): Bayesian Inference, Boltzmann Machine | recording (Boltzmann machine)
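Complementing the kalman.m demo script above, a minimal 1-D Kalman filter sketch in Python with toy dynamics and noise parameters: the filter's posterior mean tracks the latent state more accurately than the raw observations do.

```python
import numpy as np

# 1-D Kalman filter: Bayesian tracking of a latent state x under linear
# Gaussian dynamics and Gaussian observation noise.
rng = np.random.default_rng(0)
a, q, r = 0.95, 0.1, 0.5          # dynamics, process var, observation var (toy)
T = 100
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x + np.sqrt(r) * rng.standard_normal(T)     # noisy observations

mu, P = 0.0, 1.0                  # posterior mean and variance
est = np.zeros(T)
for t in range(T):
    mu_pred, P_pred = a * mu, a * a * P + q      # predict step
    K = P_pred / (P_pred + r)                    # Kalman gain
    mu = mu_pred + K * (y[t] - mu_pred)          # update with observation
    P = (1 - K) * P_pred
    est[t] = mu
print("obs MSE:", np.mean((y - x) ** 2), " filter MSE:", np.mean((est - x) ** 2))
```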
Topic 6: Computing with distributed representations | Oct. 29 – Nov. 7
- Vector Symbolic Architectures (VSA) / Hyperdimensional Computing
- HD Algebra
- Vector Function Architectures (VFA) / Fractional power encoding (FPE)
- Resonator networks for vector factorization
- Residue Hyperdimensional Computing
- Reading:
- Kanerva: Hyperdimensional computing
- Kleyko et al., Vector Symbolic Architectures as a Framework for Emerging Hardware
- Joshi, Halseth, Kanerva: Language geometry using random indexing
- Frady et al., Resonator Networks, 1
- Kymn et al., Residue Hyperdimensional Computing
- Other resources
- Neuroscience 299 fall 2021 course on Computing with High-Dimensional Vectors
- Tony Plate thesis
- HD computing/VSA website
- VSA online seminar series
- Lecture slides (Oct. 29) | Recording: Boltzmann machine, HD Computing
- Lecture slides (Oct. 31) | Recording: Langevin sampling, HD Computing (continued)
- Lecture slides (Nov. 5)
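A toy sketch of the VSA/HD-computing operations listed above, using bipolar vectors: binding by elementwise multiplication, bundling by thresholded addition, and similarity by a normalized dot product. The key-value record is a made-up example in the spirit of Kanerva's paper.

```python
import numpy as np

# Hyperdimensional computing with D-dimensional bipolar vectors.
rng = np.random.default_rng(0)
D = 10_000
def rand_vec():
    return rng.choice([-1, 1], size=D)

name, age = rand_vec(), rand_vec()              # role vectors
alice, thirty = rand_vec(), rand_vec()          # filler vectors
record = np.sign(name * alice + age * thirty)   # bind roles to fillers, bundle

probe = record * name                           # unbind the "name" role
print("sim to alice :", probe @ alice / D)      # high (~0.5)
print("sim to thirty:", probe @ thirty / D)     # near 0
```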
Advanced topics | Nov. 12 – Dec. 5
- Nov. 12: Invariant object recognition via factorization of form and motion, slides | recording
- Renner… Frady, et al. (2024) Neuromorphic visual scene understanding with resonator networks.
- Kymn, Mazelet, et al. (2024) Compositional factorization of visual scenes with convolutional sparse coding and resonator networks.
- Nov. 13: Sparse Distributed Memory and cerebellum (Kanerva) (Note: Wednesday, Nov. 13 at 3:30)
- Sparse Distributed Memory and related models
- slides | zoom recording (passcode: Aq!9?5i5)
- Nov. 19: Coupled-oscillator models (Bybee)
- Izhikevich, Resonate-and-Fire Neurons
- Frady & Sommer, Robust computation with rhythmic spike patterns
- Wang & Roychowdhury, Oscillator-based Ising machines for solving combinatorial optimization problems
- Bybee et al., Efficient optimization with higher-order Ising machines
- slides
- Nov. 21: Optimization with dynamical systems (Bybee)
- Nov. 26: Self-organizing maps, cortical maps and plasticity
- Hyvarinen, Hoyer, Inki, Topographic Independent Component Analysis
- slides | recording (media gallery on bCourses)
- Dec. 3, 5: Perception-action loop
- Little & Sommer (2013) Learning and exploration in action-perception loops.
- O’Regan & Noë (2001) – A sensorimotor account of vision and visual consciousness
- Philipona, O’Regan & Nadal (2003) – Is there something out there?
- Anderson et al. (2020) – High acuity vision from retinal image motion
- Cheung, Weiss & Olshausen (2017) – Emergence of foveal image sampling from learning to attend in visual scenes
- Lecture slides | recording
- Dec. 5: Learning in hierarchical models – equilibrium propagation
- Scellier (2020) – A deep learning theory for neural networks grounded in physics