Characterizing neurons in the visual area V4 through interpretable machine learning

Reza Abbasi-Asl

UC Berkeley
Wednesday, May 23, 2018 at 12:00pm
560 Evans

In the past decade, machine learning research has focused largely on developing algorithms and models with remarkably high predictive performance. Models such as convolutional neural networks (CNNs) have achieved state-of-the-art results in computer vision and autonomous driving, and, through transfer learning, in areas such as computational neuroscience. However, interpreting these models remains a challenge, primarily because of the large number of parameters involved.

In this talk, we propose and investigate two frameworks, based on (1) stability and (2) compression, for building more interpretable machine learning models. Both frameworks are demonstrated in the context of a computational neuroscience study. First, we introduce DeepTune, a stability-driven visualization framework for CNN-based models. DeepTune is used to characterize biological neurons in the challenging V4 area of the primate visual cortex. This visualization uncovers the diversity of stable patterns explained by the V4 neurons. Then, we introduce CAR, a framework for structural compression of CNNs based on pruning filters. CAR increases the interpretability of CNNs while retaining the diversity of filters in the convolutional layers. CAR-compressed CNNs give rise to a new set of accurate models of V4 neurons with much simpler structures. Our results suggest, to some extent, that these CNNs resemble the structure of the primate brain.
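As a rough illustration of the structural-compression idea, the sketch below prunes whole convolutional filters ranked by an importance score. This is a minimal sketch under stated assumptions, not the method presented in the talk: the L1-norm score used here is a common proxy, whereas CAR ranks filters by their contribution to classification accuracy, and the function name, shapes, and pruning ratio are illustrative.

```python
import numpy as np

def prune_conv_filters(weights, keep_ratio=0.5):
    """Structurally prune a conv layer by removing whole filters.

    weights: array of shape (n_filters, in_channels, kh, kw).
    Importance is scored here by each filter's L1 norm -- an
    illustrative stand-in for CAR's accuracy-based criterion.
    Returns the pruned weights and the indices of kept filters.
    """
    n_filters = weights.shape[0]
    n_keep = max(1, int(round(keep_ratio * n_filters)))
    scores = np.abs(weights).reshape(n_filters, -1).sum(axis=1)
    # Keep the top-scoring filters, preserving their original order.
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep], keep

# Example: prune half of the 64 filters in a 64x3x7x7 conv layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 7, 7))
w_pruned, kept = prune_conv_filters(w, keep_ratio=0.5)
print(w_pruned.shape)  # (32, 3, 7, 7)
```

Pruning entire filters, rather than individual weights, leaves the layer dense with a smaller output dimension, which is what yields the structurally simpler models described above.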
This is joint work with Bin Yu; the talk will also cover joint work with Yuansi Chen, Adam Bloniarz, Michael Oliver, Ben Willmore, and Jack L. Gallant.