Perception has long been considered an inference operation: through experience, the brain constructs internal models of the sensory environment and then uses them to assign sensory stimuli to appropriate object categories. When confronted with noisy or incomplete sense data, these internal models identify the object category most likely to account for the data. As a result, we perceive an external world populated with distinct objects rather than the field of continuous sense data with which our brains are in fact confronted. While recent years have seen great progress both in algorithms that successfully categorize naturalistic sensory stimuli and in methods for investigating the brain mechanistically, little progress has been made in linking the algorithmic and mechanistic levels of understanding in neural populations. Here I will present a novel paradigm for investigating how perception is implemented at both of these levels. The paradigm uses a principled approach to generate complex synthetic stimuli matched to each subject's perceptual limit, in order to maximize the likelihood of engaging the perceptual system under investigation. This approach will make it possible to investigate whether the algorithms used in state-of-the-art artificial intelligence systems are also used by the brain and, if so, how this is achieved mechanistically.
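The inference operation described above can be made concrete with a minimal sketch. This is not the author's model: it simply illustrates, under assumed Gaussian generative models, the standard Bayesian formulation in which each object category defines a distribution over a one-dimensional sensory feature, and noisy sense data are assigned to the category with the highest posterior probability. All names, category parameters, and the choice of a 1-D feature are illustrative assumptions.

```python
import math
import random

# Illustrative sketch only (not the paradigm described in the abstract):
# perceptual categorization as Bayesian inference. Each object category is
# modeled as a Gaussian distribution over a single sensory feature; noisy
# observations are assigned to the category with the highest posterior.

def category_posteriors(data, categories):
    """Posterior probability of each category given noisy observations.

    categories: list of (prior, mean, sd) tuples, one per object category.
    """
    log_posts = []
    for prior, mean, sd in categories:
        # Gaussian log-likelihood of all observations under this category
        log_lik = sum(
            -0.5 * ((x - mean) / sd) ** 2
            - math.log(sd * math.sqrt(2 * math.pi))
            for x in data
        )
        log_posts.append(math.log(prior) + log_lik)
    # Normalize in log space for numerical stability
    m = max(log_posts)
    weights = [math.exp(lp - m) for lp in log_posts]
    total = sum(weights)
    return [w / total for w in weights]

# Two hypothetical object categories with equal prior probability
categories = [(0.5, 0.0, 1.0), (0.5, 2.0, 1.0)]

# Noisy, incomplete sense data actually generated by category 1 (mean 2.0)
random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(5)]

posts = category_posteriors(data, categories)
best = max(range(len(posts)), key=posts.__getitem__)
print(best)  # → 1, the most likely object category given the data
```

With more observations or less noise the posterior concentrates on the generating category, which is one way to think about stimuli "matched to the perceptual limit": they hold the observer near the regime where the posterior is maximally uncertain.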