
VS298: Natural Scene Statistics

This seminar will examine what is known about the statistical structure of natural visual and auditory scenes, and theories of how sensory coding strategies have been adapted to this structure.

Instructor: Bruno Olshausen

Enrollment information:

VS 298 (section 4), 2 units
CCN: 66489

Meeting time and place:

Monday 6-8, Evans 560

Email list:

nss2014@lists.berkeley.edu (subscribe)

Readings:

Books and review articles:

  • Natural Image Statistics by Hyvarinen, Hurri & Hoyer
  • Olshausen BA & Lewicki MS (2013) What natural scene statistics can tell us about cortical representation. In: The Cognitive Neurosciences V. paper
  • Geisler WS (2008) Visual perception and the statistical properties of natural scenes. Annual Review of Psychology paper

Weekly schedule:

Each entry below gives the date and topic, the readings, and the presenters.
Feb. 3 Redundancy reduction, whitening, and power spectrum of natural images (illustrative sketch below)

  • Barlow (1961): Theory of redundancy reduction paper
  • Atick (1992): Theory of whitening paper
  • Field (1987): 1/f² power spectrum and sparse coding paper

Additional reading:

  • Attneave (1954) – ‘Some informational aspects of visual perception’ paper
  • Laughlin (1981) – Histogram equalization of contrast response paper
  • Srinivasan (1982) – ‘Predictive coding: a fresh view of inhibition in the retina’ paper
  • Switkes (1978) – Power spectrum of carpentered environments paper
  • Ruderman (1997) – Why are images 1/f² paper
  • Torralba & Oliva (2003) – Power spectrum of natural image categories paper
Presenters: Anthony DiFranco, Dylan Paiton, Michael Levy
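
For orientation only (not one of the assigned readings): a minimal NumPy sketch of this week's two central computations, estimating the radially averaged power spectrum of an image, which for natural images falls off roughly as 1/f², and whitening with a ramp filter of the kind analyzed by Atick (1992). The soft cutoff `f0` and the placeholder image array `img` are illustrative assumptions, not values from the readings.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2-D grayscale image."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / counts
    return np.arange(len(radial)), radial      # log-log slope is typically near -2

def whiten(img, f0=0.4):
    """Flatten the spectrum with an |f| ramp plus a soft low-pass cutoff."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    f = np.hypot(fx, fy)
    filt = f * np.exp(-(f / f0) ** 4)          # ramp flattens the ~1/f amplitude (1/f² power) falloff
    return np.real(np.fft.ifft2(np.fft.fft2(img) * filt))

# img: any grayscale natural image as a 2-D numpy array (assumed loaded elsewhere)
# freqs, p = radial_power_spectrum(img)
# white = whiten(img)
```
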
Feb. 10 Whitening in time and color; Robust coding

  • Dong & Atick (1995): spatiotemporal power spectrum of natural movies paper
  • Ruderman (1998): statistics of cone responses paper
  • Karklin & Simoncelli (2012): noisy population coding of natural images paper

Additional reading:

  • Dong & Atick (1995) – spatiotemporal decorrelation using lagged and non-lagged cells paper
  • Doi & Lewicki (2007) – A theory of retinal population coding paper
Presenters: Chayut Thanapirom, Michael Levy, Yubei Chen
Feb. 17 ** Holiday **
Feb. 24 Higher-order statistics and sensory coding (illustrative sketch below)

  • Barlow (1972): Sparse coding paper
  • Field (1994): What is the goal of sensory coding? paper
  • Bell & Sejnowski (1995): Independent component analysis. paper

Additional reading:

  • Redlich (1993): Redundancy Reduction as a Strategy for Unsupervised Learning. paper
  • Baddeley (1996): Searching for filters with ‘interesting’ output distributions: An uninteresting direction to explore? paper
  • O’Regan & Noë (2001): A sensorimotor account of vision and visual consciousness paper
Presenters: Karl Zipser, Michael Levy, Mayur Mudigonda
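
A small numerical companion to the sparse-coding argument in Field (1994), offered as a sketch rather than anything from the readings: the output of an oriented derivative filter applied to a natural image is strongly kurtotic (heavy-tailed), whereas the same filter applied to Gaussian noise stays close to Gaussian. The image array `img` is an assumed input, loaded elsewhere.

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import kurtosis

# img: grayscale natural image as a 2-D numpy array (assumed loaded elsewhere)
kernel = np.array([[-1.0, 0.0, 1.0]])                            # crude horizontal-derivative filter
resp_image = convolve(img, kernel, mode="reflect").ravel()
resp_noise = convolve(np.random.randn(*img.shape), kernel, mode="reflect").ravel()

print("natural image excess kurtosis:", kurtosis(resp_image))    # typically well above 0 (heavy tails)
print("Gaussian noise excess kurtosis:", kurtosis(resp_noise))   # near 0: filtering keeps Gaussian Gaussian
```
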
March 3 ICA and sparse coding of natural images (illustrative sketch below)

  • Bell & Sejnowski (1997): ICA of natural images paper
  • Olshausen & Field (1997): Sparse coding of natural images paper
  • van Hateren & Ruderman (1998), Olshausen (2003): ICA/sparse coding of natural video paper 1, paper 2

Additional reading:

  • Olshausen & Field (1996): simpler explanation of sparse coding paper
Presenters: Mayur Mudigonda, Zayd Enam, Georgios Exarchakis
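
A compact sketch in the spirit of the Olshausen & Field energy function, with coefficients inferred by iterative soft thresholding (ISTA) and the dictionary updated by gradient descent; it is illustrative only, not the procedure from the papers. A matrix `patches` of whitened image patches (n_patches x patch_dim) is assumed, and the sparsity weight, learning rate, and iteration counts are arbitrary placeholder values.

```python
import numpy as np

def ista(D, x, lam=0.1, n_steps=50):
    """Infer a minimizing 0.5 * ||x - D a||^2 + lam * ||a||_1 by proximal gradient steps."""
    L = np.linalg.norm(D, ord=2) ** 2            # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        a = a - D.T @ (D @ a - x) / L            # gradient step on the reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold (L1 prox)
    return a

def learn_dictionary(patches, n_basis=64, lam=0.1, lr=0.05, n_iters=2000, seed=0):
    """Alternate sparse inference and a Hebbian-like dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((patches.shape[1], n_basis))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        x = patches[rng.integers(len(patches))]  # one random whitened patch per step
        a = ista(D, x, lam)
        D += lr * np.outer(x - D @ a, a)         # gradient step on reconstruction error w.r.t. D
        D /= np.linalg.norm(D, axis=0)           # keep basis functions unit norm
    return D

# patches: (n_patches x patch_dim) array of whitened image patches (assumed prepared elsewhere)
# D = learn_dictionary(patches)                  # columns tend toward localized, oriented filters
```
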
March 11 **Tuesday** Statistics of natural sound and auditory coding

  • Voss & Clarke: ‘1/f noise and music’ paper
  • Smith & Lewicki: sparse coding of natural sound paper
  • Klein/DeWeese: ICA/sparse coding of spectrograms paper 1, paper 2
Presenters: Tyler Lee, Yubei Chen, TBD
March 17 Higher-order group structure

  • Geisler: contour statistics paper
  • Hyvarinen: subspace ICA/topographic ICA paper 1, paper 2
  • Lyu & Simoncelli: radial Gaussianization paper

Additional reading:

  • Parent & Zucker (1989): Trace Inference, Curvature Consistency, and Curve Detection, paper
  • Field et al. (1993): Contour Integration by the Human Visual System: Evidence for a Local “Association Field” paper
  • Zetzsche et al. (1999): The atoms of vision: Cartesian or polar? paper
  • Garrigues & Olshausen (2010): Group Sparse Coding with a Laplacian Scale Mixture Prior, paper
Presenters: Chayut Thanapirom, Guy Isely, TBD
March 24 ** Spring recess **
March 31 Energy-based models (illustrative sketch below)

  • Hinton: Product of experts models, paper
  • Osindero & Hinton: Product of Experts model of natural images, paper
  • Roth & Black: Fields of experts, paper

Additional reading:

  • Hinton: Practical guide to training RBMs paper
  • Teh et al: Energy-based models for sparse overcomplete representation, paper
  • Zhu, Wu & Mumford: FRAME (Filters, random fields, and maximum entropy), paper
Presenters: Evan Shelhamer, Brian Cheung, Chris Warner
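
As a concrete toy instance of the energy-based models discussed this week, here is a binary restricted Boltzmann machine trained with one-step contrastive divergence (CD-1). It is a bare-bones sketch, not the recipe from Hinton's practical guide; a binarized data matrix `V` (n_samples x n_visible) is assumed, and the learning rate and sizes are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=64, lr=0.05, n_epochs=10, seed=0):
    """Binary RBM trained with one-step contrastive divergence (CD-1)."""
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)                        # visible biases
    b_h = np.zeros(n_hidden)                         # hidden biases
    for _ in range(n_epochs):
        for v0 in V:
            ph0 = sigmoid(v0 @ W + b_h)              # hidden probabilities given the data
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b_v)            # one-step reconstruction
            ph1 = sigmoid(pv1 @ W + b_h)
            W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
            b_v += lr * (v0 - pv1)
            b_h += lr * (ph0 - ph1)
    return W, b_v, b_h

# V: (n_samples x n_visible) array of binarized image patches (assumed prepared elsewhere)
# W, b_v, b_h = train_rbm(V)
```
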
April 7 Learning invariances through ‘slow feature analysis’ (illustrative sketch below)

  • Foldiak/Wiskott: slow feature analysis, paper 1, paper 2
  • Hyvarinen: ‘Bubbles’ paper
  • Berkes et al.: factorizing ‘what’ and ‘where’ from video, paper
Presenters: Guy Isely, Chayut Thanapirom, Bharath Hariharan
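
A minimal linear slow feature analysis sketch (after the Wiskott formulation): whiten the signal, then take the directions along which the finite-difference temporal derivative has least variance. The (time x dim) array `X` is an assumed input, e.g. a sequence of image-patch features.

```python
import numpy as np

def linear_sfa(X, n_components=2):
    """Return projection vectors for the n slowest components of a (time x dim) signal X."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    W = E / np.sqrt(np.maximum(d, 1e-12))            # whitening matrix (columns)
    Z = X @ W                                        # whitened signal
    dZ = np.diff(Z, axis=0)                          # finite-difference temporal derivative
    d2, E2 = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return W @ E2[:, :n_components]                  # eigh sorts ascending: smallest = slowest

# X: (time x dim) signal, e.g. features of an image sequence (assumed prepared elsewhere)
# slow = X @ linear_sfa(X, n_components=2)
```
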
April 14 Manifold and Lie group models (illustrative sketch below)

  • Carlsson et al.: Klein bottle model of natural images, paper
  • Culpepper & Olshausen: Learning manifold transport operators, paper
  • Roweis & Saul: Local Linear Embedding, paper
Presenters: Yubei Chen, Bruno/Mayur, James Arnemann
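
For the Roweis & Saul reading, a quick self-contained illustration using scikit-learn's implementation of Locally Linear Embedding; the swiss-roll toy data stands in for the image-patch manifolds discussed in the other papers, and the neighborhood size is an arbitrary choice.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, color = make_swiss_roll(n_samples=2000, random_state=0)   # 3-D toy manifold
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
Y = lle.fit_transform(X)   # 2-D embedding preserving each point's local linear reconstruction
```
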
April 21 Hierarchical models

  • Karklin & Lewicki (2003): density components, paper
  • Shan & Cottrell: stacked ICA, paper
  • Cadieu & Olshausen (2012): learning intermediate representations of form and motion, paper
Presenters: Tyler Lee, Brian Cheung, Dylan Paiton
April 28 Deep network models

  • Hinton & Salakhutdinov (2006): stacked RBMs, paper
  • Le et al. (2011): Unsupervised learning (Google brain, ‘cat’ neurons), paper
  • Krizhevsky et al. (2012): Supervised learning, ImageNet 1000 paper
Presenters: TBD, TBD, Reza Abbasi-Asl
May 6 **Tuesday** Special topics

  • Fergus (2013): visualizing what deep nets learn paper
  • Schmidhuber: deep nets (paper), focusing on LOCOCODE (paper)
  • Image compression with Hopfield networks
Presenters: Shiry Ginosar, Anthony DiFranco, Chris Hillar
May 12 Special topics