Are there any benefits to incorporating the foveated nature of human vision into image-based metrics of perception and into computer vision systems? In this talk I hope to advance our understanding of this question through my work combining psychophysical experiments (eye-tracking), computational modelling, and computer vision.
The first part of the talk will revolve around peripheral representations and their use in enhancing current computational models of clutter. Our foveated clutter models apply a layer of foveated pooling on top of dense feature representations to simulate crowding effects and perceptual losses as a function of retinal eccentricity. I will show that adding peripheral representations improves the correlation of non-foveated clutter models with the behavioural responses of observers engaged in visual search.

The second part of the talk will present a new model of foveated rendering: the NeuroFovea model, which produces approximations of visual metamers via feed-forward foveated style-transfer networks. I will present work in progress on the limitations and advantages of generative foveated models, as well as the application of these models to advanced computer vision systems that extend deep learning.