This course provides an introduction to theories of neural computation, with an emphasis on the visual system. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, principles of neural coding and information processing, self-organization (learning rules), recurrent networks and attractor dynamics, probabilistic models, and computing with distributed representations.

**Instructor**: Bruno Olshausen, baolshausen@berkeley.edu, office hours immediately after class

**GSIs**:

- Alex Belsten, belsten@berkeley.edu (OH: W 9-10:30, Evans 560)
- Chris Kymn, cjkymn@berkeley.edu (OH: Tu 9:30-11, Evans 567)

**Lectures**: Tuesdays & Thursdays 3:30-5, Anthropology & Art Practice Building, Room 155

**Grading**: Based on problem sets (60%) and final project (40%)

- Problem sets will be posted on this webpage and should be submitted via bCourses.
- Late problem sets will be penalized 10% per day.
- Your lowest scoring problem set will be dropped at the end of the semester.
- You may work in small groups (2-3) on the problem sets but are responsible for submitting individually.
- Final project guidelines (more details to come):
- 5-page report + poster or oral presentation at project presentation day (early December).
- You may work in teams of 3-4 students.
- The project should explore one of the topics covered in class in more depth, either mathematically or computationally, or it can be a critical analysis of the prospects for how these approaches can inform our understanding of the brain.

**Textbooks**:

- [**HKP**] Hertz, J., Krogh, A., and Palmer, R.G. *Introduction to the Theory of Neural Computation.* Amazon
- [**DJCM**] MacKay, D.J.C. *Information Theory, Inference and Learning Algorithms.* Available online or Amazon
- [**DA**] Dayan, P. and Abbott, L.F. *Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems.* Amazon
- [**SL**] Sterling, P. and Laughlin, S. *Principles of Neural Design.* MITCogNet

**Discussion forum**: We have established a Piazza site where students can ask questions or propose topics for discussion.

**Final project suggestions**: This document provides some suggestions to get you thinking about a topic for your final project.

**Project presentation day schedule**

**Topic and Assignment Schedule**

The first ten weeks are subdivided into five topic/problem set modules, each spanning two weeks. The remaining five weeks are devoted to the final project.

| Topic | Assignment | Release Date | Due Date |
|---|---|---|---|
| 1. Biophysics of computation | Problem Set 1 (Dataset) | Aug. 30 | Sep. 8 |
| 2. Sensory coding | Problem Set 2 (Dataset) | Sep. 13 | Sep. 22 |
| 3. Representation learning | Problem Set 3 (Dataset) | Sep. 27 | Oct. 11 |
| 4. Attractor networks and probabilistic models | Problem Set 4 (Dataset) | Oct. 13 | Oct. 26 |
| 5. Computing with distributed representations | Problem Set 5 | Oct. 25 | Nov. 8 |
| – | Final Project Proposal | Nov. 10 | |
| – | Final Project Presentation | Dec. 8 (tentative) | |
| – | Final Project Writeup | Dec. 13 (tentative) | |

**Syllabus**

**Course intro: Welcome/Course logistics | Aug. 25**

**Topic 0: Introduction to Computational Neuroscience | Aug. 30**

- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Reading:
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint.
- Mitchell, M. Why AI is harder than we think

- Supplemental reading:
- Additional neuroscience background:
- From Neuron to Brain, by Nicholls, et al. (good intro to neuroscience)
- Principles of Neural Science, by Kandel and Schwartz et al. (basic neuroscience textbook)
- Synaptic organization of the Brain, by Gordon Shepard (good overview of neural circuits)
- Ion Channels of Excitable Membranes, by Bertil Hille (focuses on ion channel dynamics)

- Lecture slides

**Topic 1a: Biophysics of computation | Sept. 1, 6**

- Passive membrane / RC circuit
- Basic neural models
- Perceptron
- Leaky Integrate-and-Fire
- Linear-Nonlinear Poisson

- Reading:
- **SL** chapters 6 (pp. 138-154) and 7
- **DA** chapter 5.1-5.6
- **HKP** chapter 5
- Mead, C., Chapter 1: Introduction and Chapter 4: Neurons from *Analog VLSI and Neural Systems*.
- Handout on Linear Neuron Models
- Supervised learning in single-stage feedforward networks

- Additional background:
- Dynamics with differential equations
- Simulating differential equations
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex.

- Lecture slides
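The leaky integrate-and-fire model above can be simulated in a few lines. This is a minimal Euler-integration sketch; the parameter values (membrane time constant, resistance, threshold and reset voltages, input current) are illustrative choices, not taken from the readings:

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, R=1e7,
                 v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R*I.
    Euler integration; returns the voltage trace and spike-time indices."""
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        dv = (-(v[t - 1] - v_rest) + R * I[t - 1]) / tau
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_thresh:          # threshold crossing -> emit spike
            spikes.append(t)
            v[t] = v_reset            # reset membrane potential
    return v, spikes

# A constant suprathreshold current (steady state 20 mV above rest)
# produces regular firing for these parameters.
n_steps = int(0.5 / 1e-4)             # 500 ms of simulated time
v, spikes = simulate_lif(np.full(n_steps, 2e-9))
```

Because the RC dynamics are linear below threshold, the interspike interval can be read off in closed form, which makes this a useful sanity check for Problem Set 1-style simulations.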

**Topic 1b: Physics of computation | Sept. 8**

- Transconductance amplifier
- Analog VLSI and silicon retina
- Reading:
- Mead, C., Chapter 5: Transconductance amplifier from *Analog VLSI and Neural Systems*.
- Mead, C., Chapter 15: Silicon retina from *Analog VLSI and Neural Systems*.

- Additional background:
- Transistor physics: Mead, C., Chapters 2 & 3 from *Analog VLSI and Neural Systems*.
- Resistive networks: Mead, C., Chapter 7/Appendix C from *Analog VLSI and Neural Systems*.
- Semiconductors: Carver Mead talk on “Lessons from the Early Days of Semiconductors”

- Lecture slides

**Topic 2a: Sensory coding | Sept. 13**

- Phototransduction
- Signal detection and optimal pooling
- Reading:
- **SL** chapters 8, 11
- Sampath & Rieke (2004) paper on thresholding synapses

- Inner-life of the cell video
- Lecture slides

**Topic 2b: Neural encoding and decoding | Sept. 15**

- Spiking neuron models
- Leaky-integrate and fire (LIF) model
- Linear-nonlinear Poisson (LNP) model
- Oscillations and synchrony
- Reading:
- **SL** chapter 10
- **DA** chapters 1-4, 5.4
- Mainen & Sejnowski, Reliability of Spike Timing in Neocortical Neurons.
- Eliasmith & Anderson, Temporal representation in spiking neurons (chapter 4)
- Koepsell, …, Sommer, Retinal oscillations carry visual information to cortex

- Further background:
- Spikes: Exploring the Neural Code, by Rieke, Warland, de Ruyter van Stevenick & Bialek

- Lecture slides
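The linear-nonlinear-Poisson cascade listed above is straightforward to simulate. In this sketch the exponential temporal filter, rectifying nonlinearity, and gain are illustrative assumptions rather than fits to data:

```python
import numpy as np

rng = np.random.default_rng(0)

def lnp_spikes(stimulus, kernel, dt=0.001, gain=50.0):
    """Linear-nonlinear-Poisson cascade: linearly filter the stimulus,
    pass the result through a rectifying nonlinearity to get a firing
    rate, then draw spike counts from a Poisson distribution per bin."""
    drive = np.convolve(stimulus, kernel, mode="same")   # linear stage
    rate = gain * np.maximum(drive, 0.0)                 # static nonlinearity (Hz)
    return rng.poisson(rate * dt)                        # Poisson spiking

stim = rng.standard_normal(5000)          # white-noise stimulus, 1 ms bins
kernel = np.exp(-np.arange(20) / 5.0)     # toy exponential temporal filter
counts = lnp_spikes(stim, kernel)         # spike counts per time bin
```

Replacing the rectifier with other static nonlinearities (e.g. an exponential) changes the rate statistics but not the structure of the cascade.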

**Topic 2c: Efficient coding | Sept. 20**

- Signal compression in retina: theory of whitening
- Tiling, subdivision of labor by different cell classes
- Reading:
- Karklin & Simoncelli, Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons
- Van Essen & Anderson, Information Processing Strategies and Pathways in the Primate Visual System

- Further research:
- Chichilnisky paper
- Cheung, Weiss & Olshausen, Emergence of foveal image sampling from learning to attend in visual scenes
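The whitening theory discussed in this module can be illustrated with a toy decorrelation transform. This is a PCA-whitening sketch on synthetic correlated Gaussian data standing in for neighboring-pixel intensities; the mixing matrix is an arbitrary illustrative choice:

```python
import numpy as np

def whiten(X, eps=1e-8):
    """PCA whitening: rotate data into the eigenbasis of its covariance
    and rescale each axis to unit variance, so outputs are decorrelated."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals + eps)       # whitening matrix
    return Xc @ W

# Correlated 2-D Gaussian samples, a stand-in for the strong
# correlations between neighboring pixels in natural images.
rng = np.random.default_rng(1)
A = np.array([[2.0, 1.5], [0.0, 0.5]])
X = rng.standard_normal((10000, 2)) @ A.T
Z = whiten(X)                              # covariance of Z is ~identity
```

In the retina, noise makes pure whitening suboptimal at low light levels; this sketch covers only the noiseless decorrelation limit.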

**Topic 2d: Auditory coding | Sept. 22**

- Cochlea and auditory nerve
- Time-frequency analysis
- Phase and amplitude coding by spikes
- ICA of natural sound
- Reading:
- Lecture notes on Sound and the ear, and Auditory information processing
- Olshausen & O’Connor, A New Window On Sound
- Lewicki, Efficient coding of natural sounds
- Sparse coding and ICA

- Auditory demonstrations
- Further background:
- R.F. Lyon, Human and Machine Hearing

- Lecture slides

**Topic 3: Representation learning | Sept. 27 – Oct. 6**

- Hebbian learning and PCA
- Winner-take-all learning
- Sparse Coding and ICA
- Slow Feature Analysis (SFA)
- Topographic organization, self-organizing maps
- Manifold learning
- Reading:
- HKP chapters 8,9
- DA Chapter 10
- Handout on Hebbian learning and PCA
- Foldiak (1990) Forming sparse representations by local anti-hebbian learning
- Olshausen & Field (1996) Sparse coding paper
- Wiskott & Sejnowski, Slow feature analysis
- Hyvarinen, Hoyer, Inki, Topographic Independent Component Analysis
- Roweis & Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding
- Chen, Paiton, Olshausen, The sparse manifold transform

- Lecture slides (9/27, 29 – linear Hebbian learning and PCA, competitive learning, Foldiak model)
- Lecture slides (10/4 – sparse coding)
- Lecture slides (10/6,11 – maps and manifolds)
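The link between Hebbian learning and PCA covered in this module can be demonstrated with Oja's rule, whose stable fixed point is the leading principal component. The learning rate, epoch count, and mixing matrix below are illustrative choices:

```python
import numpy as np

def oja(X, lr=0.005, epochs=30, seed=0):
    """Oja's rule: w += lr * y * (x - y * w), a Hebbian update with a
    built-in normalization term; w converges to the first principal
    component of the data."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # linear neuron output
            w += lr * y * (x - y * w)      # Hebbian term minus decay
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
A = np.array([[3.0, 0.0], [1.0, 0.5]])     # anisotropic mixing matrix
X = rng.standard_normal((2000, 2)) @ A.T   # correlated Gaussian data
w = oja(X)                                 # ~ first PC of the data
```

Without the `- y * w` decay term this is plain Hebbian learning, and the weight vector grows without bound; the decay is what pins `w` to unit length at the PCA solution.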

**Topic 4a: Attractor dynamics | Oct. 13, 18**

- Hopfield networks
- Continuous Attractor Networks
- Reading:
- **HKP** chapters 2 and 3 (sec. 3.3-3.5), 7 (sec. 7.2-7.3)
- **DJCM** chapter 42
- **DA** chapter 7
- Handout on attractor networks
- Hopfield (1982)
- Hopfield (1984)
- Kechen Zhang paper on bump circuits

- Lecture slides
- Chris Kymn Lecture slides (Ring attractor)
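A minimal Hopfield network makes the attractor idea concrete: patterns stored with the Hebbian outer-product rule become fixed points, and corrupted inputs relax back to them. Network size, number of stored patterns, and corruption level here are illustrative:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule: W = (1/N) * sum_p x_p x_p^T, zero diagonal."""
    N = patterns.shape[1]
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, max_steps=20):
    """Iterate synchronous sign updates until the state stops changing."""
    for _ in range(max_steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0            # break ties deterministically
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # 3 random bipolar memories
W = train_hopfield(patterns)

# Flip 10 of 100 bits of the first memory; the dynamics clean it up.
probe = patterns[0].copy()
flip = rng.choice(100, size=10, replace=False)
probe[flip] *= -1
recovered = recall(W, probe)
```

Hopfield (1982) uses asynchronous updates, which guarantee descent of the energy function; the synchronous version here is simpler but can in principle oscillate between two states.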

**Topic 4b: Probabilistic models | Oct. 20-25**

- Perception as inference
- Boltzmann machine
- Bayesian inference
- Sparse coding and ICA
- Dynamical models (Kalman filter)
- Reading:
- **HKP** chapter 7.1
- **DJCM** chapters 1-3, 20-24, 28, 41, 43
- **DA** chapter 10
- Mumford, Neuronal architectures for pattern theoretic problems.
- Yuille & Kersten, Vision as Bayesian inference: analysis by synthesis?
- Olshausen (2013) Perception as an Inference Problem
- A probability primer
- Bayesian probability theory and generative models
- Sparse Coding and ICA
- Hinton & Sejnowski, Learning and Relearning in Boltzmann Machines

- Additional reading:
- Simoncelli & Adelson paper on Bayesian wavelet coring
- Weiss & Simoncelli, Motion illusions as optimal percepts
- Koster et al., Modeling higher-order correlations within cortical microcolumns
- Robbie Jacobs’ notes on Kalman filter
- kalman.m demo script
- Dynamic texture models

- Lecture slides 10/20 – probabilistic/generative models
- Lecture slides 10/25 – Bayesian inference
- Lecture slides 10/25 – Boltzmann machines
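To complement the Kalman filter notes and the `kalman.m` demo listed above, here is a scalar Kalman filter sketch in Python. The random-walk state model and the noise variances are illustrative assumptions:

```python
import numpy as np

def kalman_1d(obs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state model:
    x_t = x_{t-1} + w_t (var q),  y_t = x_t + v_t (var r)."""
    x, p = x0, p0
    estimates = []
    for y in obs:
        p = p + q                  # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x = x + k * (y - x)        # correct with the innovation y - x
        p = (1.0 - k) * p          # posterior uncertainty shrinks
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(4)
true_x = np.cumsum(rng.normal(0.0, 0.1, 500))   # latent random walk
obs = true_x + rng.normal(0.0, 1.0, 500)        # noisy observations
est = kalman_1d(obs)                            # filtered estimate
```

The filtered estimate has much lower error than the raw observations, which is the "perception as inference" point in miniature: the prior dynamics regularize noisy sensory data.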

**Topic 5: Computing with distributed representations | Oct. 27 – Nov. 3**

- Learning distributed representations via backpropagation
- Vector Symbolic Architectures (VSA) / Hyperdimensional Computing
- HD Algebra
- Vector Function Architectures (VFA)
- Resonator networks for vector factorization
- Sparse distributed memory (SDM)
- Reading:
- Supervised learning in multi-layer feedforward networks – “back propagation”
- Kanerva: Hyperdimensional computing
- Kleyko et al., Vector Symbolic Architectures as a Framework for Emerging Hardware
- Joshi, Halseth, Kanerva: Language geometry using random indexing
- Frady et al., Resonator Networks, 1

- Other resources:
- HD computing/VSA website (in progress)
- VSA online seminar series

- Lecture slides – supervised learning via backpropagation
- Lecture slides – HD Computing intro
- Lecture slides – Chris Kymn (11/1)
- Lecture slides – HDC visual scene analysis and factorization
- animation of resonator network

- Lecture slides – Chris Kymn (11/8)
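The core VSA/HD-computing operations covered in this module — binding, bundling, and similarity — can be sketched with bipolar vectors. The record encoding below (roles `color`/`shape`, fillers `red`/`circle`) is a standard illustrative example, not drawn from a specific reading:

```python
import numpy as np

D = 10000                       # high dimension: random pairs are quasi-orthogonal
rng = np.random.default_rng(5)

def rand_hv():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bundle(*vs):
    """Superposition: elementwise majority vote (sign of the sum)."""
    s = np.sign(np.sum(vs, axis=0))
    s[s == 0] = 1
    return s

def sim(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for identical."""
    return a @ b / D

# Encode the record {color: red, shape: circle}: bind each role to its
# filler by elementwise multiplication, then bundle the bound pairs.
color, shape, red, circle = rand_hv(), rand_hv(), rand_hv(), rand_hv()
record = bundle(color * red, shape * circle)

# Unbind: multiplying by a role vector recovers a noisy copy of its filler.
query = record * color          # ~ red + crosstalk noise
```

Because elementwise multiplication of bipolar vectors is its own inverse, unbinding is just a second multiply; the residual crosstalk from the other bound pair is what resonator networks are designed to clean up at scale.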

**Topic 6: Advanced topics | Nov. 8 – Dec. 1**

- Nov. 8 – VFA and models of hippocampus (Chris Kymn)
- Nov. 15 – Sparse distributed memory and the cerebellum (Pentti Kanerva)
- Nov. 17 – Perception-action loop (Fritz Sommer)
- Olshausen lecture slides
- O’Regan & Noë (2001) – A sensorimotor account of vision and visual consciousness
- Philipona, O’Regan & Nadal (2003) – Is there something out there?
- Sommer lecture slides on theory of Predictive Information Gain (PIG)
- Little & Sommer (2013) Learning and exploration in action-perception loops.
- Mobin, Arnemann & Sommer (2014) Information-based learning by agents in unbounded state spaces.

- Nov. 22 – no class
- Nov. 29 – Hierarchical models and equilibrium propagation (Olshausen)
- Fukushima (1980), Neocognitron
- Hinton (1981) – A parallel computation that assigns canonical object-base frames of reference
- Olshausen (1993) – Dynamic routing model
- Scellier (2020) – A deep learning theory for neural networks grounded in physics

- Dec. 1 – Coupled-oscillator models (Connor Bybee)
- Izhikevich, Resonate-and-Fire Neurons
- Frady & Sommer, Robust computation with rhythmic spike patterns