VS265: Neural Computation - Fall 2018

This course provides an introduction to the theory of neural computation. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, supervised and unsupervised learning, associative memory models, recurrent networks, probabilistic/graphical models, and models of neural coding in the brain.

Instructor:  Bruno Olshausen, 570 Evans, office hours immediately after class

GSI:  Shariq Mobin, 560 Evans, office hours 2:30–4:00 pm on Mondays.

Lectures:  Tuesdays & Thursdays, 3:30–5:00 pm, 170 Barrows.

Grading:  based on weekly assignments (60%) and final project (40%)

  • Late homework will not be accepted, but your lowest homework score will be dropped at the end of the semester
  • Each homework will be graded holistically out of 3 points:
    • 3 – problems were done correctly aside from minor errors
    • 2 – problems were attempted but some portions or concepts were missed
    • 1 – something relevant was done but no clear direction or mostly incomplete
    • 0 – problems were not attempted
  • You have one week from the time grades are released on bCourses to dispute your grade

Textbooks:

  • [HKP] Hertz, J., Krogh, A., and Palmer, R.G. Introduction to the Theory of Neural Computation. Amazon
  • [DJCM] MacKay, D.J.C. Information Theory, Inference, and Learning Algorithms. Available online or Amazon
  • [DA] Dayan, P. and Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Amazon

Discussion forum:  We have established a Piazza site where students can ask questions or propose topics for discussion.

Syllabus

Aug. 23:  Introduction

Aug. 28, 30:  Neuron models

Sept. 4, 6:  Supervised learning

Sept. 11, 13, 18:  Unsupervised learning

Sept. 20, 25, 27:  Sparse, distributed coding

Oct. 2, 4:  Self-organizing maps

  • Plasticity and cortical maps
  • Self-organizing maps, Kohonen nets
  • Models of experience dependent learning and cortical reorganization

Oct. 9:  Manifold learning (Chen)

  • Locally linear embedding (LLE), Isomap
  • The sparse manifold transform

Oct. 11:  Reinforcement learning (Mobin)

  • Reward-based learning
  • Predicting future rewards via temporal-difference learning

Oct. 16, 18:  Recurrent networks

  • Hopfield networks, memories as ‘basins of attraction’
  • Line attractors and ‘bump circuits’

Oct. 23, 25:  Probabilistic models and inference

  • Probability theory and Bayes’ rule
  • Learning and inference in generative models
  • The mixture of Gaussians model (Charles Frye)

Oct. 30, Nov. 1:  Boltzmann machines

  • Sampling, inference and learning rules
  • Restricted Boltzmann machines and energy-based models

Nov. 6, 8:  Independent Component Analysis (ICA)

  • Relation between sparse coding and ICA
  • Applications

Nov. 13, 15, 20:  Dynamical models

  • Hidden Markov models
  • Kalman filter model
  • Recurrent neural networks

Nov. 27, 29:  Neural coding

  • Integrate-and-fire model
  • Neural encoding and decoding
  • Limits of precision in neurons
  • Generalized linear models (GLMs)

Dec. 4, 6:  High-dimensional (HD) computing

  • Holographic reduced representation; vector symbolic architectures
  • Computing with 10,000 bits
  • Sparse distributed memory