Bruno Olshausen

Professor

Helen Wills Neuroscience Institute and School of Optometry

Olshausen Lab

Current Research

How does the brain build representations of 3D surfaces, material properties, objects, and motion from the voltage fluctuations of photoreceptors in the retina? Somehow, nature has figured out a solution to this daunting problem. The desire to solve this puzzle, by reverse engineering nature's solution, is what drives my lab's research. While neurophysiological and neuroanatomical studies over the past half century have revealed much about the structure and function of the visual system, this approach alone will not solve the puzzle. The problem we currently face in neuroscience is not a lack of data per se, but rather the inability to ask the right questions. If we knew the basic principles that allow a system to 'see', then we would know what components to look for. But we do not yet have such principles. For this reason, I am focusing my efforts on developing new theoretical frameworks and models of vision that could help guide our thinking in experimental neuroscience.

Background

I started out as an engineer wanting to build robots inspired by how brains work, and I ended up as a neuroscientist attempting to understand how nervous systems process information, inspired and guided by engineering principles. I first learned about neural networks as a student at Stanford, through Bernie Widrow's course on Adaptive Filters and Misha Pavel and Ilan Vardi's Connectionist Models seminar (1986/87). I then found my way to Pentti Kanerva's Sparse Distributed Memory (SDM) research group at NASA/Ames, where I worked for two years as a research assistant developing vision applications of SDM. During this time I learned about Charlie Anderson and David Van Essen's work on 'shifter circuits,' which eventually led to my doing a Ph.D. under their joint supervision in the CNS program at Caltech in the early 1990s. My thesis was on dynamic routing circuits, essentially a generalization of shifter circuits that could serve as a neural mechanism for forming position-, size-, and rotation-invariant representations in the visual cortex. Toward the end of my thesis work I learned about David Field's work on models of sensory coding based on natural image statistics, which seemed like a promising way to learn feature representations at various stages of the cortical hierarchy. It remains one of my central goals to bring these two ideas, dynamic routing and feature learning, together in order to build a hierarchical model of the visual cortex.

I began my academic career as an Assistant Professor in the Department of Psychology (and later, Neurobiology, Physiology and Behavior) and Center for Neuroscience at UC Davis. In 2002, together with Pentti Kanerva and Fritz Sommer, I helped Jeff Hawkins launch the Redwood Neuroscience Institute (RNI). In 2005 RNI was incorporated into UC Berkeley's Helen Wills Neuroscience Institute and renamed the Redwood Center for Theoretical Neuroscience, for which I currently serve as director.