Hebrew University, Jerusalem
What makes a good model of natural images?
Tuesday 20th of February 2007 at 12:30pm
Many low-level vision algorithms assume a prior probability over images, and there has been great interest in trying to learn this prior from examples. Since images are very non-Gaussian, high-dimensional, continuous signals, learning their distribution presents a tremendous computational challenge. Perhaps the most successful recent algorithms are those that model image statistics with a product of potentials defined on filter outputs. However, calculating the probability of an image under these models requires evaluating an intractable partition function. This makes learning very slow and makes it virtually impossible to compare the likelihood of two different models. Given this computational difficulty, it is hard to say whether the nonintuitive features learned by such models represent a true property of natural images or an artifact of the approximations used during learning. In this paper we present (1) tractable lower and upper bounds on the partition function of models based on filter outputs, and (2) efficient learning algorithms that do not require any sampling. Our bounds build on recent results in machine learning that deal with Gaussian potentials; we extend these results to non-Gaussian potentials and derive a novel EM algorithm for approximating the maximum-likelihood filters. Applying our results to previous models shows that the nonintuitive features are not an artifact of the learning process but rather capture robust properties of natural images.
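To make the model class concrete, here is a minimal sketch (not the talk's actual implementation) of a product-of-potentials image model: an image x is assigned unnormalized log-probability sum_i phi(w_i . x), where the w_i are linear filters and the partition function Z is left implicit. The random filters and the Student-t-style potential below are illustrative assumptions, not the learned filters discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_t_log_potential(u, alpha=1.0):
    # log phi(u) = -alpha * log(1 + u^2): a heavy-tailed potential,
    # mirroring the heavy-tailed statistics of filter responses on
    # natural images. The specific form is a common toy choice.
    return -alpha * np.log1p(u ** 2)

def unnormalized_log_prob(x, filters, alpha=1.0):
    # Sum the potential over all filter outputs. log Z is omitted
    # (it is exactly the intractable quantity the talk bounds), so
    # only differences between images are meaningful.
    responses = filters @ x
    return student_t_log_potential(responses, alpha).sum()

# Toy example: a 16-dimensional "image" patch and 8 random filters.
filters = rng.standard_normal((8, 16))
patch = rng.standard_normal(16)
print(unnormalized_log_prob(patch, filters))
```

Because Z is missing, this sketch can only rank images against each other under fixed filters; comparing two different filter sets by likelihood is exactly what requires the partition-function bounds the abstract describes.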
3105 Tolman Hall (Beach Room)