Five new PhD graduates!

September 3, 2019

It is with great pleasure that we highlight our five (!!) newly-minted PhD graduates.


Brian Cheung

Brian’s thesis work focused on structures that emerge in neural networks during learning. One major theme of this work was studying the disentanglement of latent features, with a particular paper leveraging semi-supervised autoencoders to learn factors of variation in image datasets of digits and faces. Another work developed a theory for foveal image sampling based on learning constraints imposed during the acquisition of visual search and recognition skills. Brian’s advisor was Bruno Olshausen.

Brian plans to continue his work on learning algorithms as a postdoctoral scholar, focusing on how to manipulate and control learning. He hopes this future work will be broadly useful to machine learning and open up applications to new environments and types of data.



Chris Warner

Chris’s thesis work developed a theory of image representation in the retina based on phase coupling among neural populations in the inner retinal layers. It reconsidered the textbook view on the function of bipolar, amacrine, and retinal ganglion cells, proposing a theory for phase encoding among these cells that he showed was useful for segmenting features of ethologically relevant stimuli. This theory complements more traditional retinal models based on spike rates, superimposing onto these codes precise spike-timing information that captures statistical structure across large fields of view. Chris also developed a technique for detecting cell assemblies based on a latent variable statistical model of this phenomenon, proposing that the technique may be useful as a general neural data analysis tool for systems beyond the retina. Chris’s advisor was Fritz Sommer.

Chris is currently on tour with his band, Outta Thin Air.



Dylan Paiton

Dylan’s thesis focused on the Locally Competitive Algorithm (LCA), a recurrent neural network for performing sparse coding and dictionary learning. Unlike many popular artificial neural network models, the LCA includes lateral connectivity among neurons, which produces ‘population nonlinearities’ with desirable coding properties, including improved robustness, selectivity, and efficiency. Dylan’s thesis provides an in-depth analysis of the LCA’s response properties along with several novel extensions. The core computational principles of the LCA are distinct from those of the artificial neural networks most often used in industrial settings, and he argues that there is much to be gained from incorporating LCA-like computations in place of feed-forward, point-wise nonlinear neurons. Dylan’s advisor was Bruno Olshausen.
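
For readers unfamiliar with the LCA, the basic dynamics can be sketched in a few lines. This is a minimal illustration of the standard formulation (after Rozell et al.), not Dylan’s extensions: each neuron integrates a feed-forward drive while being inhibited through lateral connections, and a soft threshold yields the sparse code. The parameter values here are arbitrary choices for the sketch.

```python
import numpy as np

def lca(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Minimal Locally Competitive Algorithm sketch: membrane potentials u
    integrate feed-forward drive b = Phi^T x while the lateral term
    (Phi^T Phi - I) a implements competition among neurons."""
    b = Phi.T @ x                            # feed-forward input
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral (competition) weights
    u = np.zeros(Phi.shape[1])               # membrane potentials
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                          # population nonlinearity
        u += (1.0 / tau) * (b - u - G @ a)   # leaky integration
    return soft(u)                           # final sparse code
```

Because active neurons suppress neighbors whose dictionary elements overlap with theirs, the network settles on a code in which only a few units respond, rather than every unit with a nonzero feed-forward input.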

Dylan will be starting a postdoc in the Bethge Lab at the University of Tübingen in Germany this October. He will be developing hierarchical unsupervised learning models for robust computer vision applications.



Mayur Mudigonda

Mayur’s thesis focused on two different topics: 1) Hamiltonian Monte Carlo (HMC) algorithms for sampling from complex probabilistic models and 2) the integration of tactile information with other sensory modalities in service of dexterous manipulation. His HMC work focused on methods that forgo ‘detailed balance’ as a way to improve the convergence rate of HMC samplers. The work on haptics explored a range of dexterous manipulation tasks in both simulated and real-world environments, where haptic information was used to augment visual information. Mayur’s advisor was Mike DeWeese.
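
For context, a single step of textbook HMC (the reversible baseline that Mayur’s work modifies) looks like the following sketch. Note the Metropolis correction at the end is exactly the detailed-balance step his research explores removing; the step size and trajectory length are arbitrary illustrative values.

```python
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20,
             rng=np.random):
    """One standard HMC step: resample momentum, simulate Hamiltonian
    dynamics with a leapfrog integrator, then accept/reject to enforce
    detailed balance."""
    p = rng.standard_normal(q.shape)                # fresh momentum
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * eps * grad_log_prob(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_prob(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)
    # Metropolis correction: this is the detailed-balance step
    h_old = -log_prob(q) + 0.5 * (p @ p)
    h_new = -log_prob(q_new) + 0.5 * (p_new @ p_new)
    if rng.uniform() < np.exp(h_old - h_new):
        return q_new
    return q
```

Dropping or relaxing the final accept/reject breaks reversibility, which (with appropriate corrections) can let the chain explore the distribution faster, at the cost of a more delicate correctness argument.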

Mayur is currently enjoying some time to meditate and will be applying for research scientist positions this Fall.



Shariq Mobin

Shariq’s thesis focused on building computational models for solving the cocktail party problem. The cocktail party problem describes the brain’s ability to selectively focus on a particular auditory stimulus while ignoring many others, as one does to communicate with a friend at a noisy cocktail party. It is believed that this ability is mediated by top-down feedback connections in the auditory pathway that indicate which auditory information is currently relevant. Shariq developed novel neural network architectures that incorporate top-down information to modulate the auditory stimuli extracted from a mixture of sounds. His work made progress towards tackling the cocktail party problem by utilizing insights from auditory neuroscience. Shariq’s advisor was Bruno Olshausen.
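
A toy illustration of the underlying idea of top-down selection: a mask over the spectrum keeps only the “attended” part of a mixture. This hand-built frequency mask is purely hypothetical and far simpler than Shariq’s architectures, which learn such selection with neural networks, but it shows how a mixture can be decomposed and one source extracted.

```python
import numpy as np

def attend_to_band(mixture, sr, band):
    """Keep only the frequency band we are 'attending' to (a hypothetical
    stand-in for a learned, top-down-modulated mask)."""
    spec = np.fft.rfft(mixture)                       # mixture spectrum
    freqs = np.fft.rfftfreq(len(mixture), d=1.0 / sr) # bin frequencies (Hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])    # binary attention mask
    return np.fft.irfft(spec * mask, n=len(mixture))  # back to waveform
```

In a learned system, the mask would instead be produced by the network, conditioned on top-down information about which source is currently relevant.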

Shariq has recently founded the company AudioFocus, where he will be extending the ideas of his thesis to develop the world’s first hearing aid that works in noisy restaurants. You can find out more about AudioFocus on their website and on TechCrunch.