Difference between revisions of "VS265: Reading"
From RedwoodCenter
Revision as of 21:38, 9 October 2014
Contents
- 1 Aug 28: Introduction
- 2 Sept 2: Neuron models
- 3 Sept 4: Linear neuron, Perceptron
- 4 Sept 11: Multicompartment models, dendritic integration
- 5 Sept. 16, 18: Supervised learning
- 6 Sept. 23, 24: Unsupervised learning
- 7 Sept 30, Oct 2: Attractor Networks and Associative Memories
- 8 Oct 7: Ecological utility and the mythical neural code
Aug 28: Introduction
- HKP chapter 1
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint. Daedalus, Winter 1988.
- Bell, A.J. Levels and loops: the future of artificial intelligence and neuroscience. Phil Trans R Soc B: Biol Sci, 354:2013–2020 (1999)
- The 1973 Lighthill debate on the future of AI
Optional:
- Land, MF and Fernald, RD. The Evolution of Eyes, Ann Revs Neuro, 1992.
- Zhang K, Sejnowski TJ (2000) A universal scaling law between gray matter and white matter of cerebral cortex. PNAS, 97: 5621–5626.
- O'Rourke, N.A. et al. "Deep molecular diversity of mammalian synapses: why it matters and how to measure it." Nature Reviews Neurosci. 13 (2012)
- Stephen Smith Array Tomography movies
- Solari & Stoner, Cognitive Consilience
Sept 2: Neuron models
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons from Analog VLSI and Neural Systems, Addison-Wesley, 1989.
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex. Science, 264: 1333-1336.
Background reading on dynamics, linear time-invariant systems and convolution, and differential equations.
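As a concrete companion to the neuron-model readings, here is a minimal leaky integrate-and-fire sketch. This is a standard textbook model, not code from any of the listed papers, and all parameter values below are illustrative assumptions:

```python
import numpy as np

def lif_spike_times(I, dt=0.1, tau=10.0, v_rest=-65.0,
                    v_thresh=-50.0, v_reset=-65.0, R=10.0):
    """Euler integration of a leaky integrate-and-fire neuron.

    dV/dt = (-(V - v_rest) + R*I) / tau, with a spike and reset
    whenever V crosses threshold. Parameters are illustrative.
    """
    v = v_rest
    spikes = []
    for t, i_t in enumerate(I):
        dv = (-(v - v_rest) + R * i_t) / tau
        v += dt * dv
        if v >= v_thresh:
            spikes.append(t * dt)
            v = v_reset
    return spikes

# A constant suprathreshold current drives regular, periodic firing
spikes = lif_spike_times(np.full(1000, 2.0))
```

With constant input the model fires at perfectly regular intervals; with zero input the voltage stays at rest and no spikes occur.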
Sept 4: Linear neuron, Perceptron
- HKP chapter 5, DJCM chapters 38-40, 44, DA chapter 8 (sec. 4-6)
- Linear neuron models
Background on linear algebra:
- Linear algebra primer
- Jordan, M.I. An Introduction to Linear Algebra in Parallel Distributed Processing in McClelland and Rumelhart, Parallel Distributed Processing, MIT Press, 1985.
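To make the perceptron topic concrete, here is a minimal sketch of the classic Rosenblatt learning rule (update on mistakes only). The toy AND-style dataset and the function names are our own illustrative choices:

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    """Rosenblatt perceptron rule: on each misclassified example,
    w += y_i * x_i. Labels are in {-1, +1}; the bias is folded in
    as an extra constant input column."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x_i, y_i in zip(Xb, y):
            if y_i * (w @ x_i) <= 0:            # misclassified (or on boundary)
                w += y_i * x_i                  # perceptron update
                errors += 1
        if errors == 0:                          # converged on separable data
            break
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Linearly separable toy problem (AND-like labels)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
```

By the perceptron convergence theorem, the loop terminates on any linearly separable dataset; XOR-style data would cycle forever, which motivates the multi-layer networks covered later.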
Sept 11: Multicompartment models, dendritic integration
- Koch, Single Neuron Computation, Chapter 19
- Rhodes P (1999) Functional Implications of Active Currents in the Dendrites of Pyramidal Neurons
- Schiller J (2003) Submillisecond Precision of the Input–Output Transformation Function Mediated by Fast Sodium Dendritic Spikes in Basal Dendrites of CA1 Pyramidal Neurons
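A simple way to build intuition for the multicompartment readings is a passive two-compartment (soma + dendrite) model, which shows how dendritic input is attenuated at the soma. This is a generic sketch; the conductances and parameter values are illustrative assumptions, not fit to any of the papers above:

```python
def two_compartment(I_dend, dt=0.05, steps=4000,
                    g_c=0.1, g_l=0.05, C=1.0, E_l=-65.0):
    """Euler integration of a passive soma/dendrite pair coupled by
    conductance g_c, with current I_dend injected into the dendrite:

        C dVs/dt = -g_l (Vs - E_l) + g_c (Vd - Vs)
        C dVd/dt = -g_l (Vd - E_l) + g_c (Vs - Vd) + I_dend
    """
    vs = vd = E_l
    for _ in range(steps):
        dvs = (-g_l * (vs - E_l) + g_c * (vd - vs)) / C
        dvd = (-g_l * (vd - E_l) + g_c * (vs - vd) + I_dend) / C
        vs += dt * dvs
        vd += dt * dvd
    return vs, vd

vs, vd = two_compartment(I_dend=1.0)
```

At steady state both compartments are depolarized above rest, but the somatic deflection is only a fraction (here g_c / (g_l + g_c) = 2/3) of the dendritic one, the basic attenuation that active dendritic currents in the readings act against.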
Sept. 16, 18: Supervised learning
- HKP Chapters 5, 6
- Handout on supervised learning in single-stage feedforward networks
- Handout on supervised learning in multi-layer feedforward networks - "back propagation"
Further reading:
- Y. LeCun, L. Bottou, G. Orr, and K. Muller (1998) "Efficient BackProp," in Neural Networks: Tricks of the trade, (G. Orr and Muller K., eds.).
- NetTalk demo
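The back-propagation handout's core algorithm can be sketched in a few lines: a forward pass, a squared-error loss, and chain-rule ("delta") gradients propagated from the output layer back to the hidden layer. The network size, learning rate, and XOR task below are illustrative choices, not prescriptions from the readings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(steps=3000, lr=1.0, seed=0):
    """Two-layer sigmoid network trained by back-propagation on XOR.
    Returns the per-step mean squared error."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)                # hidden layer
        out = sigmoid(h @ W2 + b2)              # output layer
        err = out - y
        losses.append(float((err ** 2).mean()))
        d_out = err * out * (1.0 - out)         # output-layer deltas
        d_h = (d_out @ W2.T) * h * (1.0 - h)    # back-propagated deltas
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return losses

losses = train_xor()
```

XOR is the standard minimal example here because no single-layer perceptron can solve it, so any drop in error demonstrates the hidden layer doing useful work.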
Sept. 23, 24: Unsupervised learning
- HKP Chapters 8 and 9, DJCM chapter 36, DA chapter 8, 10
- Handout: Hebbian learning and PCA
- PDP Chapter 9 (full text of Michael Jordan's tutorial on linear algebra, including section on eigenvectors)
Optional:
- Atick, Redlich. What does the retina know about natural scenes?, Neural Computation, 1992.
- Dan, Atick, Reid. Efficient Coding of Natural Scenes in the Lateral Geniculate Nucleus: Experimental Test of a Computational Theory, J Neuroscience, 1996.
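The Hebbian-learning/PCA connection in the handout can be illustrated with Oja's rule, a Hebbian update with a built-in decay term that converges to the first principal component of zero-mean data. The toy dataset and learning-rate choices below are our own:

```python
import numpy as np

def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
    """Oja's rule: dw = lr * y * (x - y * w), where y = w.x is a
    linear neuron's output. The decay term -lr*y^2*w keeps |w| near 1,
    and w converges to the leading eigenvector of the data covariance.
    Data are assumed zero-mean."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # linear neuron output
            w += lr * y * (x - y * w)      # Hebbian term + Oja decay
    return w / np.linalg.norm(w)

# Correlated 2-D data whose principal axis lies along (1, 1)/sqrt(2)
rng = np.random.default_rng(1)
s = rng.normal(size=500)
X = np.column_stack([s + 0.1 * rng.normal(size=500),
                     s + 0.1 * rng.normal(size=500)])
w = oja_first_pc(X)
```

The learned weight vector agrees (up to sign) with the top eigenvector of the sample covariance, which is the point of the Hebbian-PCA correspondence.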
Sept 30, Oct 2: Attractor Networks and Associative Memories
- "HKP" Chapter 2 and 3 (sec. 3.3-3.5), 7 (sec. 7.2-7.3), DJCM chapter 42, DA chapter 7
- Handout on attractor networks - their learning, dynamics and how they differ from feed-forward networks
- Hopfield82
- Hopfield84
- Willshaw69
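A minimal sketch of the associative memory in Hopfield (1982): patterns are stored with the Hebbian outer-product rule, and recall iterates a sign-threshold update from a corrupted cue until it settles into the stored attractor. For simplicity this sketch uses synchronous updates (Hopfield's analysis uses asynchronous ones); pattern sizes and corruption level are illustrative:

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian outer-product storage: W = (1/N) sum_p x_p x_p^T,
    with the diagonal zeroed. Patterns are +/-1 vectors."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, x, steps=20):
    """Iterate x <- sign(W x) until a fixed point (synchronous sketch)."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0          # break ties consistently
        if np.array_equal(x_new, x):
            break                         # reached an attractor
        x = x_new
    return x

# Store two random patterns, then recall one from a corrupted cue
rng = np.random.default_rng(0)
patterns = np.sign(rng.normal(size=(2, 64)))
W = hopfield_store(patterns)
cue = patterns[0].copy()
cue[:8] *= -1                             # flip 8 of 64 bits
recalled = hopfield_recall(W, cue)
```

Well below the network's capacity (roughly 0.14 N patterns for N units), the dynamics clean up the corrupted bits and recover the stored pattern, which is the content-addressable-memory behavior the readings analyze.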
Oct 7: Ecological utility and the mythical neural code
- Feldman10 Ecological utility and the mythical neural code