
VS265: Syllabus

From RedwoodCenter


Revision as of 04:33, 1 September 2014

Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Goals of AI/machine learning vs. theoretical neuroscience
  3. Turing vs. neural computation

  Reading: HKP chapter 1

Neuron models

  1. Membrane equation, compartmental model of a neuron
  2. Linear systems: vectors, matrices, linear neuron models
  3. Perceptron model and linear separability

  Reading: HKP chapter 5, DJCM chapters 38-40

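The membrane equation in item 1 can be sketched as forward-Euler integration of an RC circuit. All parameter values below are illustrative choices, not taken from the course readings:

```python
import numpy as np

# Passive membrane (RC circuit): C dV/dt = -(V - E_L)/R_m + I_e
# Parameter values are illustrative.
C = 1.0        # membrane capacitance (nF)
R_m = 10.0     # membrane resistance (MOhm)
E_L = -70.0    # leak reversal potential (mV)
I_e = 1.0      # injected current (nA)
dt = 0.1       # time step (ms)

V = E_L
trace = []
for _ in range(2000):             # simulate 200 ms with forward Euler
    dV = (-(V - E_L) / R_m + I_e) * dt / C
    V += dV
    trace.append(V)

# With constant input, V relaxes to E_L + R_m * I_e = -60 mV
# with time constant tau = R_m * C = 10 ms.
print(round(trace[-1], 2))
```
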
Supervised learning

  1. Perceptron learning rule
  2. Adaptation in linear neurons, Widrow-Hoff rule
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation

  Reading: HKP chapter 6, DJCM chapters 38-40 and 44, DA chapter 8 (sec. 4-6)

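A minimal sketch of the perceptron learning rule (item 1) on a toy linearly separable problem (the AND function); the learning rate and epoch count are arbitrary:

```python
import numpy as np

# Perceptron learning rule: update weights only on misclassified examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])    # AND gate with +/-1 labels

w = np.zeros(2)
b = 0.0
eta = 0.1                        # learning rate (arbitrary)

for epoch in range(20):
    for xi, ti in zip(X, y):
        out = 1 if w @ xi + b > 0 else -1
        if out != ti:            # mistake-driven update
            w += eta * ti * xi
            b += eta * ti

pred = [1 if w @ xi + b > 0 else -1 for xi in X]
print(pred)                      # matches y once converged
```

Because the problem is linearly separable, the perceptron convergence theorem guarantees the loop stops making updates after finitely many mistakes.
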
Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding

  Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8 and 10

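Item 1 (linear Hebbian learning and PCA) is often illustrated with Oja's rule, a Hebbian update with built-in weight normalization. This sketch uses made-up correlated 2-D data whose first principal component lies along (1, 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two strongly correlated coordinates, so the first
# principal component is close to (1, 1)/sqrt(2).
common = rng.standard_normal(5000)
X = np.stack([common + 0.1 * rng.standard_normal(5000),
              common + 0.1 * rng.standard_normal(5000)], axis=1)

# Oja's rule: dw = eta * y * (x - y * w). The -y^2 * w term keeps
# the plain Hebbian update eta * y * x from growing without bound.
w = rng.standard_normal(2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

w_unit = w / np.linalg.norm(w)
print(np.round(np.abs(w_unit), 2))   # close to [0.71, 0.71]
```
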
Plasticity and cortical maps

  1. Cortical maps
  2. Self-organizing maps, Kohonen nets
  3. Models of experience-dependent learning and cortical reorganization
  4. Manifold learning

  Reading: HKP chapter 9, DA chapter 8

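A minimal 1-D sketch of a Kohonen net (item 2): units arranged on a line learn a topographic map of a uniform input distribution. The annealing schedule and all parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Kohonen self-organizing map: 10 units with scalar weights learn
# preferred inputs for values drawn uniformly from [0, 1].
n_units = 10
positions = np.arange(n_units)     # locations of the units on the map
w = rng.random(n_units)            # random initial preferred inputs

n_steps = 5000
for t in range(n_steps):
    frac = t / n_steps
    eta = 0.2 * (1 - frac) + 0.01 * frac       # annealed learning rate
    sigma = 3.0 * (1 - frac) + 0.5 * frac      # shrinking neighborhood
    x = rng.random()
    winner = np.argmin(np.abs(w - x))          # best-matching unit
    h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
    w += eta * h * (x - w)                     # pull neighborhood toward x

# After training, neighboring units prefer neighboring input values,
# i.e. the weights vary smoothly (monotonically) across the map.
print(np.round(w, 2))
```
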
Recurrent networks

  1. Hopfield networks
  2. Models of associative memory, pattern completion
  3. Line attractors and ‘bump circuits’
  4. Dynamical models

  Reading: HKP chapters 2, 3 (sec. 3.3-3.5), and 7 (sec. 7.2-7.3), DJCM chapter 42, DA chapter 7

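Items 1 and 2 can be sketched with a small Hopfield network: store patterns with the Hebbian outer-product rule, then complete a corrupted cue. The two patterns below are made up (and chosen to be orthogonal):

```python
import numpy as np

# Hopfield network: store patterns via the outer-product rule, then
# run the dynamics on a corrupted cue until it settles.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p) / n
np.fill_diagonal(W, 0)            # no self-connections

# Corrupt the first pattern in two positions, then iterate.
s = patterns[0].copy()
s[0] = -s[0]
s[3] = -s[3]
for _ in range(10):               # synchronous updates until settled
    s = np.where(W @ s >= 0, 1, -1)

print(list(s))                    # recovers the first stored pattern
```
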
Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  6. Kalman filter model
  7. Energy-based models

  Reading: HKP chapter 7 (sec. 7.1), DJCM chapters 1-3, 20-24, 41, and 43, DA chapter 10

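Item 3 (the mixture of Gaussians model) can be sketched with a few lines of EM on synthetic 1-D data; the true means, initial guesses, and iteration count below are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D data from two Gaussians with means -2 and 2.
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

mu = np.array([-1.0, 1.0])        # illustrative initial guesses
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
          / np.sqrt(2 * np.pi * var)
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    Nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(x)

print(np.round(np.sort(mu), 1))   # near [-2.,  2.]
```
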
Neural implementations

  1. Integrate-and-fire model
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  4. Neural synchrony and phase-based coding

  Reading: DA chapters 1-4, sec. 5.4
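
The integrate-and-fire model in item 1 above can be sketched as follows: integrate the leaky membrane equation, emit a spike whenever the voltage crosses threshold, then reset. All parameter values are illustrative textbook-style choices:

```python
import numpy as np

# Leaky integrate-and-fire neuron: tau dV/dt = E_L - V + R_m * I_e,
# with a spike-and-reset whenever V reaches threshold.
tau = 10.0       # membrane time constant (ms)
E_L = -70.0      # resting potential (mV)
V_th = -54.0     # spike threshold (mV)
V_reset = -80.0  # reset potential (mV)
RI = 20.0        # constant drive R_m * I_e (mV)
dt = 0.1         # time step (ms)

V = E_L
spikes = []
for step in range(5000):          # 500 ms of simulated time
    V += dt / tau * (E_L - V + RI)
    if V >= V_th:
        spikes.append(step * dt)  # record spike time in ms
        V = V_reset

# Since E_L + RI = -50 mV lies above threshold, the neuron fires
# regularly, with interspike interval tau * ln((E_L+RI-V_reset)/(E_L+RI-V_th)),
# here about 20 ms.
print(len(spikes))
```
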