How can we answer seemingly impossible questions in vision such as “What would vision look like if evolution started over?” or “What if evolution had taken different paths?”
Traditional approaches cannot answer these questions systematically: we can’t re-run large-scale evolution experimentally, evolutionary robotics has not evolved vision, and while deep reinforcement learning (DRL) has driven discoveries in domains such as Go and matrix multiplication, those domains don’t help us uncover or understand the biological mechanisms behind why vision works the way it does. So we ask: can we computationally recreate the evolution of vision (eyes and behavior) to answer the “why” questions of vision?
I introduce Generation-Verification loops that transform impossible scientific questions into systematic computational exploration of the vision design space. The framework couples a Generation Engine (that explores vast possibility spaces) with a Verification Engine (that validates against multiple layers of reality: task performance, biological patterns, and physics constraints).
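The loop above can be caricatured in a few lines of code. This is only an illustrative sketch under assumed interfaces (the `Candidate`, `generate`, and `verify` names and the toy scoring are mine, not the actual framework): the Generation Engine proposes candidate designs, and the Verification Engine scores each one against layered checks before selection.

```python
# Hypothetical sketch of a Generation-Verification loop.
# All names and scoring rules here are illustrative assumptions,
# not the speaker's actual system.
import random
from dataclasses import dataclass

random.seed(0)  # reproducible toy run

@dataclass
class Candidate:
    eye_params: dict          # e.g. {"n_facets": 6.0, "lens_power": 0.5}
    fitness: float = 0.0

def generate(population, n_offspring=8):
    """Generation Engine: perturb existing designs to explore the space."""
    offspring = []
    for _ in range(n_offspring):
        parent = random.choice(population)
        child = Candidate(eye_params={
            k: v * random.uniform(0.9, 1.1)
            for k, v in parent.eye_params.items()
        })
        offspring.append(child)
    return offspring

def verify(candidate):
    """Verification Engine: score against layered reality checks.
    Each layer is a stand-in scalar here; a real system would run a
    task simulation, compare against biological patterns, and enforce
    physics constraints."""
    task_score = 1.0 / (1.0 + abs(candidate.eye_params["n_facets"] - 10))
    physics_ok = 0.0 < candidate.eye_params["lens_power"] < 1.0
    return task_score if physics_ok else 0.0

def evolve(seed, generations=5, keep=4):
    """Alternate generation and verification, keeping the best designs."""
    population = [seed]
    for _ in range(generations):
        offspring = generate(population)
        for c in offspring:
            c.fitness = verify(c)
        population = sorted(population + offspring,
                            key=lambda c: c.fitness, reverse=True)[:keep]
    return population[0]

best = evolve(Candidate(eye_params={"n_facets": 6.0, "lens_power": 0.5}))
```

In this toy run, selection pulls `n_facets` toward the (arbitrarily chosen) task optimum while the physics check vetoes infeasible designs, which is the essential structure of the loop even though every detail of the real system differs.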
Using vision evolution as our proof of concept, we computationally “re-ran” evolution by evolving embodied agents, trained via DRL, with configurable eyes and neural networks in physics-based environments. Our discoveries reveal principles behind vision evolution: (1) isolated visual tasks drive morphological bifurcation—navigation evolves compound-type eyes while detection evolves camera-type eyes; (2) lens evolution emerges to break the trade-off between light collection and spatial precision; and (3) sensory-neural scaling follows power laws, where poor visual acuity creates bottlenecks that neural capacity alone cannot overcome. The framework also generalizes beyond eye morphology and neural networks to other aspects of vision that have computational models, such as foveal organization and temporal processing.
Lastly, I will discuss how these Generation-Verification loops can become a general methodology for AI-driven scientific discovery and invention. I will argue that they will enable us to 1) systematically ask the “why” and “what if” questions of science while staying grounded in reality, and 2) automatically invent new ways to see for artificial embodied systems. https://eyes.mit.edu/
—
To request the Zoom link send an email to jteeters@berkeley.edu. Also indicate if you would like to be added to the Redwood Seminar mailing list.