My fascination with computation began with my undergraduate study of Electrical Engineering and Computer Science at UC Berkeley. I was astounded by the amazing things software could do: building searchable catalogs of the World Wide Web, mapping roads across the world, and much more. Toward the end of my studies, though, I felt something was lacking: even the latest machine learning techniques seemed to fall short of human-level computation. It was then that I began reading neuroscience in my free time, though nothing came of it formally. Years later, I grew bored of my software engineering job and decided to pursue my interest in understanding, and building models of, human-level computation. That pursuit led me to the Redwood Center for Theoretical Neuroscience, where I am now a third-year PhD student in the Neuroscience department, working with Bruno Olshausen. In my free time I love to cook, play music, and travel.
Our perception of sound consists of simultaneous sound-producing events that carry information about location, quality, and meaning. However, this information is not explicit in the signal: the sound waveform that enters our ears is a composite of the individual sources in the world. In my research, I focus on building causal, real-time neural network models that can replicate aspects of this incredible behavior.