Informed Approaches to Deep Learning via Neural Networks with Random Parameters

Yasaman Bahri

Google Brain
Tuesday, April 3, 2018 at 11:00am
560 Evans

Obtaining a better understanding of neural networks with random parameters is relevant for deep learning practice, for instance by informing good initializations, and is a natural first step in building a more complete base of knowledge within deep learning. I will survey some of our recent work at Google Brain that originated from the study of random neural networks [1]. I’ll begin by discussing an exact correspondence we establish between Gaussian processes and priors over the functions computed by deep, infinitely wide neural networks, building on seminal work by Radford Neal. This exact mapping opens a route to making Bayesian predictions with deep neural networks without ever instantiating a network; we implement this procedure and compare its performance to that of finite-width networks trained with standard stochastic optimization [2]. Time permitting, I’ll also discuss our work analyzing the forward and backward propagation of signals at initialization in convolutional neural networks (CNNs), continuing a line of such work begun at Brain for other network architectures. The analysis informs initializations that enable the training of “vanilla” CNNs with ten thousand layers or more.
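
As an illustrative aside (a minimal sketch, not taken from the talk materials), the covariance function of the limiting Gaussian process can be built up layer by layer from the weight and bias variances of the random network. The sketch below assumes a fully connected ReLU network and uses the known closed-form Gaussian expectation for ReLU activations; the function names, variances, depth, and noise level are illustrative choices, not the settings used in the referenced work.

```python
import numpy as np

def relu_nngp_kernel(X1, X2, depth, sigma_w2=2.0, sigma_b2=0.1):
    """Covariance of the Gaussian process corresponding to an infinitely wide,
    fully connected ReLU network, computed by a layer-by-layer recursion."""
    d = X1.shape[1]
    # Layer-0 covariances induced by random weights and biases acting on the inputs.
    K12 = sigma_b2 + sigma_w2 * (X1 @ X2.T) / d
    K11 = sigma_b2 + sigma_w2 * np.sum(X1 * X1, axis=1) / d   # diagonal entries for X1
    K22 = sigma_b2 + sigma_w2 * np.sum(X2 * X2, axis=1) / d   # diagonal entries for X2

    for _ in range(depth):
        # Closed-form E[relu(u) relu(v)] for jointly Gaussian (u, v).
        norms = np.sqrt(np.outer(K11, K22))
        cos_t = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos_t)
        K12 = sigma_b2 + sigma_w2 / (2 * np.pi) * norms * (
            np.sin(theta) + (np.pi - theta) * cos_t)
        # On the diagonal theta = 0, so the expectation is half the variance.
        K11 = sigma_b2 + sigma_w2 * K11 / 2.0
        K22 = sigma_b2 + sigma_w2 * K22 / 2.0
    return K12

def nngp_posterior_mean(X_train, y_train, X_test, depth=10, noise=1e-2):
    """Bayesian prediction via Gaussian process regression with the kernel above;
    no finite-width network is ever instantiated or trained."""
    K_tt = relu_nngp_kernel(X_train, X_train, depth)
    K_st = relu_nngp_kernel(X_test, X_train, depth)
    alpha = np.linalg.solve(K_tt + noise * np.eye(len(X_train)), y_train)
    return K_st @ alpha
```

With a kernel of this form, prediction reduces to standard Gaussian process regression, which is what allows the comparison against finite-width networks trained with stochastic optimization described in the abstract.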