Artificial Neural Networks (ANNs) form a powerful class of models with both theoretical and practical advantages. Networks with more than one hidden layer ("deep" neural networks) compute functions in later layers that reuse intermediate results computed in earlier layers. This compositional, hierarchical structure provides a strong bias, or regularization, toward solutions that work well on a wide variety of real-world problems.
In this talk I will begin by showing a few examples of how this general compositional bias can excel at tasks as diverse as designing robot gaits and 3D objects. I will then discuss a few simple experiments that shed light on the inner workings of neural networks trained to classify images. The first study examines the computation performed by the entire set of neurons in a single layer; subsequent work illuminates the computation performed by individual units, and finally that performed by the network as a whole. Taken together, these experiments reveal some surprising behaviors of large networks and lead to greater understanding of, and intuition for, the computation performed by deep neural networks.
Jason Yosinski is a PhD student and NASA Space Technology Research Fellow working on machine learning and computer vision, mostly at the Cornell Creative Machines Lab, but sometimes at the University of Montreal and the Jet Propulsion Laboratory. His research focuses on building and understanding neural network models that allow robots to learn how to walk and computers to perceive the visual world. Since starting grad school, he has helped create the first artificially intelligent guest to be interviewed on NPR, and his work in AI has been featured in New Scientist, Fast Company, The Economist, TEDx, and the BBC. Before coming to Cornell, Mr. Yosinski graduated from Caltech, worked at a statistics startup, and spent a year in Pasadena developing a program that tricks middle school students into learning math while they play with robots.