Media Archive

Redwood Center media includes recordings of past seminars, past course offerings and other content.

The best way to search this archive is to use Ctrl/Cmd-F with relevant keywords.

Time-reversal symmetry breaking in nonreciprocal systems
December 6, 2024
Sarah Loos
University of Cambridge

While all fundamental physical interactions obey the law of action-equals-reaction, the dynamics we effectively observe in complex nonequilibrium systems ubiquitously break reciprocity at different scales. In this talk, I will discuss the intricate connection between nonreciprocity and time-reversal symmetry breaking, and how irreversibility

Scale-covariant neural representations of time
December 4, 2024
Marc Howard
Boston University

A substantial body of work over the last decade plus has converged on a tractable computational model for how the brain represents the time of past events in the ongoing firing of neurons. Populations of neurons in a variety of brain regions code for time, but with two distinct forms of receptive fields. So-called temporal context cells, observed in a variety of brain regions, respond to

Hyperdimensional Computing for Real-World IoT Applications at the Edge
November 13, 2024
Xiaofan Yu
UCSD

On-device learning has emerged as a prevailing trend that enables intelligence on edge devices for IoT applications. However, there remain multiple gaps before large-scale deployments in the real world, including dynamic and changeable environments, limited hardware resources, heterogeneous sensor modalities, etc. Hyperdimensional Computing (HDC) is an

Successes and challenges in computational models of vision
September 18, 2024
Gabriel Kreiman
Harvard

We now have powerful computer vision algorithms that can segment scenes, label objects, and recognize actions. It is tempting to use these algorithms as models of visual processing in biological brains. I will provide an overview of some of the successes in using neural network models to partially describe visual behavior

Recurrence through the bottleneck: theory-driven experiments on the Central-peripheral Dichotomy
May 24, 2024
Zhaoping Li
Max Planck Institute for Biological Cybernetics, Tuebingen

I present the Central-peripheral Dichotomy (CPD) theory (Zhaoping 2017, 2019) together with visual psychophysics tests of its predictions. CPD states that central vision is specialized for seeing (recognizing), and peripheral vision for looking (shifting gaze/attention). Given an information

Neurons as feedback controllers
May 22, 2024
Dmitri Chklovskii
CCN, Flatiron Institute, Simons Foundation and NYU Medical Center

Building upon the efficient coding and predictive information theories, we present a novel perspective that neurons not only predict but may also actively influence their future inputs through their outputs. We model neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly

RNN multitasking with bias modulations
May 10, 2024
Luca Mazzucato
University of Oregon

Changes in an animal’s behavioral state, such as arousal and movements, induce complex modulations of the baseline input currents to sensory areas, eliciting sensory modality-specific effects. A simple computational principle explaining the effects of baseline modulations to recurrent cortical circuits is lacking. We investigate the benefits of baseline modulations, referred to as biases in machine learning, using a reservoir computing

Vision and eye movements in natural behavior from insects to primates
April 18, 2024
Jean-Michel Mongeau
Pennsylvania State University

Every day we coordinate eye, head, and body movements seamlessly to go about our daily activities. Similarly, flying insects coordinate eye and body movements to orient in space. A challenge in studying vision is that moving eyes are coupled to a moving body. Another challenge is that movement is inherently closed-loop: e.g., information flows

Hyperdimensional computing meets coding theory
March 15, 2024
Netanel Raviv
Washington University in St. Louis

Hyperdimensional Computing (HDC) is an emerging computational paradigm for representing compositional information as high-dimensional vectors, and holds promise in applications ranging from machine learning to neuromorphic computing. One of the longstanding challenges in HDC is the decomposition of compositional information into its

White-Box Transformers via Sparse Rate Reduction
February 21, 2024
Sam Buchanan
Toyota Technological Institute at Chicago (TTIC)

In this talk, we contend that a natural objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a low-dimensional Gaussian mixture supported on incoherent subspaces. The goodness of such a representation can be evaluated by a principled measure, called sparse rate reduction, that

Studying the Topology of Learning-Capable Spaces
November 8, 2023
Kylie Huch
LBL

“What is consciousness?” is a highly general question; a much more quantifiable sub-question is “what are the common mathematical properties of learning-capable systems?” In this talk we will examine a number of architectures implementing learning systems, the features determining

A Scalable Performance Benchmark for Reinforcement Learning in Natural Environments
September 13, 2023
Peter Loxley
University of New England, Australia

Reinforcement learning (RL) provides a general suite of algorithms for the approximate solution of many important and interesting optimal control tasks and sequential decision-making problems. At the heart of these problems is a complex tradeoff between exploration of new states

A Sparse Quantized Hopfield Network for Online-Continual Memory
September 6, 2023
Nick Alonso
UC Irvine

An important difference between brains and deep neural networks is the way they learn. Nervous systems learn online, where a stream of noisy data points is presented in a non-independent and identically distributed (non-i.i.d.) way. Further, synaptic plasticity in the brain depends only on information local to synapses. Deep networks, on the other hand,

PASS: An Asynchronous Ising Accelerator for Probabilistic Computing and Machine Intelligence
August 30, 2023
Saavan Patel
UC Berkeley

As the demand for big data increases and traditional CPUs cannot keep pace, new computing paradigms and architectures are needed to meet the demands of our data-hungry world. Ising computing and probabilistic computing have emerged as methods to solve NP-hard

Looking and seeing in human vision in light of a severe attentional processing bottleneck in the brain
July 6, 2023
Zhaoping Li
Max Planck Institute for Biological Cybernetics

Visual attention selects only a tiny fraction of visual input information for further processing and recognition. This makes humans blind to most visual inputs. Attentional selection starts at the primary visual cortex (V1), which creates a bottom-up saliency map to guide

Navigating cortical maps with topographic deep neural networks: a unifying principle for functional organization
June 14, 2023
Eshed Margalit
Stanford University

Virtually all cortical systems feature “functional organization”: the grouping of neurons with specific functional properties into characteristic spatial arrangements. Functional organization is ubiquitous across systems and species, highly reproducible, and central to our understanding of

Neural representations for predictive processing of dynamic visual signals
May 31, 2023
Pierre-Etienne Fiquet
New York University

All organisms make temporal predictions, and their evolutionary fitness level generally scales with the accuracy of these predictions. In the context of visual perception, observer motion and continuous deformations of objects and textures structure the dynamics of visual signals, which allows for partial prediction of future inputs from past ones. Here, we propose

Capacity Analysis of Vector Symbolic Architectures
April 3, 2023
Ken Clarkson
IBM Research

Hyperdimensional computing (HDC) is a biologically inspired framework which represents symbols with high-dimensional vectors and uses vector operations to manipulate them. The ensemble of a particular vector space and a prescribed set of vector operations (including an addition-like operation for “bundling” and an outer-product-like operation for “binding”) forms a vector symbolic architecture (VSA). While VSAs have been
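
As an illustration of bundling and binding, here is a minimal sketch of one common VSA flavor, assuming random bipolar vectors, sign-of-sum bundling, and Hadamard-product binding; the particular VSAs analyzed in the talk may use different spaces and operators:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hyperdimensional vectors

def rand_hv():
    return rng.choice([-1, 1], size=D)

def bundle(*vs):   # addition-like superposition of several vectors
    return np.sign(np.sum(vs, axis=0))

def bind(a, b):    # Hadamard (elementwise) binding; self-inverse for bipolar vectors
    return a * b

def sim(a, b):     # normalized dot-product similarity
    return a @ b / D

# encode a record {color: red, shape: square} as a bundle of bound key-value pairs
color, shape, red, square = (rand_hv() for _ in range(4))
record = bundle(bind(color, red), bind(shape, square))

# binding with the key again unbinds it, recovering a noisy copy of the value
probe = bind(record, color)
assert sim(probe, red) > 0.3           # high similarity to the stored value
assert abs(sim(probe, square)) < 0.1   # near-zero similarity to unrelated vectors
```

The self-inverse binding makes key-value retrieval a single elementwise product, which is what gives these architectures their simple “query by unbinding” behavior.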

MARS via LASSO
March 15, 2023
Adityanand Guntuboyina
UC Berkeley

MARS is a popular method for nonparametric regression introduced by Friedman in 1991. MARS fits simple nonlinear and non-additive functions to regression data. We propose and study a natural LASSO variant of the MARS method. Our method is based on least squares estimation over a convex class of functions
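
To make the flavor of such a method concrete, here is a small numpy sketch: least squares over a dictionary of MARS-style hinge functions with an ℓ1 (LASSO) penalty, solved by plain ISTA. The knot grid, penalty weight, and solver are illustrative choices, not the estimator studied in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D regression data with kinks, the kind of shape hinge bases suit
x = rng.uniform(-2, 2, size=200)
y = np.maximum(x - 0.5, 0.0) - 2 * np.maximum(-x, 0.0) + 0.05 * rng.normal(size=200)

# dictionary of hinge functions max(0, x - t) and max(0, t - x) at candidate knots
knots = np.linspace(-1.5, 1.5, 13)
cols = []
for t in knots:
    cols.append(np.maximum(x - t, 0.0))
    cols.append(np.maximum(t - x, 0.0))
X = np.column_stack(cols)

# ISTA: proximal gradient descent for the LASSO objective
#   0.5 * ||X @ beta - y||^2 + lam * ||beta||_1
lam = 5.0
beta = np.zeros(X.shape[1])
step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
for _ in range(5000):
    beta -= step * (X.T @ (X @ beta - y))
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)

print("active hinges:", np.count_nonzero(beta), "of", X.shape[1])
```

The ℓ1 penalty zeroes out most candidate hinges, so knot selection falls out of the convex optimization rather than MARS’s original greedy forward pass.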

Using noise to probe recurrent neural network structure and prune connections
February 28, 2023
Rishidev Chaudhuri
UC Davis

Many networks in the brain are sparsely connected, and the brain eliminates connections during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a connection between two neurons is a difficult computational problem, depending on

Noncommuting conserved quantities in quantum thermodynamics can increase entanglement
February 22, 2023
Shayan Majidy
University of Waterloo

Across thermodynamics, systems exchange quantities that are conserved globally, such as energy, particles, and electric charges. These quantities are represented by Hermitian operators, which have traditionally been assumed to commute with each other. However, noncommutation is a key feature of quantum theory. A major challenge in studying how charges’ noncommutation influences thermodynamic phenomena is isolating

The Pitfalls and Potentials of Subjective Quality Ratings
January 25, 2023
Simon Del Pin
Colorlab, Norwegian University of Science and Technology

Subjective ratings given by observers are a critical part of research in image and video quality assessment. In this talk, I first review different approaches to subjective ratings and identify potential pitfalls that are often overlooked. Using existing and newly collected data, I statistically demonstrate the non-linear use of the scale, changes in ratings throughout the experiment, individual observer differences in rating features, and biases from allowing observers to decide how many ratings

Multiple Overlapping Cell Assemblies Active During Motor Behavior
November 21, 2022
Alessandra Stella
Jülich Research Center, Germany

The cell assembly hypothesis postulates that information processing in the brain entails the repetitive co-activation of groups of neurons. The activation of such assemblies would lead to spatio-temporal spike patterns (STPs) at the resolution of a few milliseconds. In order to test the cell assembly hypothesis, we searched for significant STPs in parallel spike trains, using the SPADE method. We analyzed experimental data from the motor cortex (M1/PMd) of macaque monkeys

Feedback modulation of neural manifolds in macaque primary visual cortex
November 21, 2022
Aitor Morales-Gregorio
Jülich Research Center, Germany

High-dimensional brain activity is in many cases organized into lower-dimensional neural manifolds. Feedback from V4 to V1 is known to mediate visual attention and computational work has shown that it can also rotate neural manifolds in a context-dependent manner. However, whether feedback signals can modulate neural manifolds in vivo remains to be ascertained. Here, we studied the neural manifolds in macaque (Macaca mulatta) visual cortex during resting state

Exploring Vector Symbolic Architectures for Applications in Computer Vision and Signal Processing
July 6, 2022
Kenny Schlegel
Chemnitz University of Technology, Chemnitz, Germany

Vector Symbolic Architectures (VSAs) combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are the exploitation of their representational power and ability to deal with fuzziness and ambiguity. The basis

Plasticity of the Drosophila head direction network
June 15, 2022
Yvette Fisher
UC Berkeley

In the Drosophila brain, head direction neurons form a network whose activity tracks the angular position of the fly using both self-movement and visual inputs. Our recent work seeks to understand the circuit and synaptic mechanisms that allow these internal and external signals to be seamlessly combined into a coherent sense of direction. Using in vivo electrophysiology and calcium imaging in combination with virtual reality we show that

Space is a latent sequence: a unifying theory of representations and remapping in the hippocampus
May 18, 2022
Dileep George
Vicarious AI

Place fields in the hippocampus show a variety of remapping phenomena in response to environmental changes. These remapping phenomena get characterized in terms of different types of coding of spatial information — object vector cells, landmark vector cells, distance coding, etc. But what if these phenomena are side effects of

What could be the data structures of the Mind?
March 30, 2022
Rina Panigrahy
Google

(Slides) What is a reasonable architecture for an algorithmic view of the mind? Is it akin to a single giant deep network or is it more like several small modules connected by some graph? How is memory captured — is it some lookup table? Take a simple event like meeting someone over coffee — how would your mind remember who the person was,

Consciousness is Supported by Near-Critical Slow Cortical Electrodynamics
March 16, 2022
Daniel Toker
UCLA

Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying

Learning accurate and interpretable models using the Tree Alternating Optimization (TAO) algorithm
March 3, 2022
Miguel Carreira-Perpinan
UC Merced

Decision trees, one of the oldest statistical and machine learning models, are widely used as interpretable models and as the basis of random and boosted forests. However, their true power remains underexploited.

Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks
February 23, 2022
Arturo Deza
MIT

Recent work suggests that feature constraints in the training datasets of deep neural networks (DNNs) drive robustness to adversarial noise (Ilyas et al., 2019). The representations learned by such adversarially robust networks have also been shown to be

Revealing visual memory representations with massive online experiments
December 9, 2021
Thomas Langlois
Princeton University and UC Berkeley

The visual system must constantly generate meaning by combining noisy sensory information with efficient internal representations. However, as essential as these hidden representations are in shaping many aspects of cognition and behavior, they can be difficult to measure directly. In my work, I apply adaptive sampling techniques

Closed-Loop Data Transcription via Minimaxing Rate Reduction
December 2, 2021
Yi Ma
EECS, UC Berkeley

This work proposes a new computational framework for learning an explicit generative model for real-world datasets. More specifically, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space that consists of multiple

From Attention to Consciousness? — Can we build sentient machines?
November 18, 2021
Joscha Bach
Intel labs Cognitive Computing Group

What is the most important unsolved problem in Artificial Intelligence? It may well be the problem of creating a unified model of the universe, to which all observations of the system can be related. This requires representations that are universal and dynamic, combine tacit and propositional knowledge, and

Sparsity and Credit Assignment in Analog Neuromorphic Systems
October 28, 2021
Jack Kendall
Rain Neuromorphics

Analog compute-in-memory (CIM) architectures for low-power neural networks have recently been shown to achieve excellent compute efficiency and high accuracy, comparable to software-based deep neural networks. However, two primary limitations prevent them from reaching their potential: 1) resistive crossbars have difficulty scaling to large, sparse networks; and

The Thousand Brains Theory of Intelligence and its Implications for AI
April 21, 2021
Jeff Hawkins and Subutai Ahmad
Numenta, Inc.

The neocortex exhibits a detailed architecture that is largely preserved across different functional areas and between species. This has led to the idea of a common cortical algorithm that underlies all aspects of perception, language, and thought. Whether such a common algorithm exists and what it could be has been debated for decades. Our team has proposed an answer

Reverse-engineering the visual system with artificial neural networks (and a bit of maths)
March 31, 2021
Stéphane Deny
Facebook AI Research

Why is visual information transmitted through many parallel channels in the optic nerve, with each channel encoding a different feature-map of the visual scene? Why do neurons in the retina prefer disk-shaped light dots, while neurons in the brain prefer oriented lines? In this talk we will see how these simple questions can be investigated using artificial neural networks and a bit of maths.

Multiplicative coding and factorization in vector symbolic models of cognition
December 16, 2020
Spencer Kent
Redwood Center for Theoretical Neuroscience, UC Berkeley

Perception and cognition are teeming with signals and concepts that interact in a multiplicative way, and it is largely this pattern of combination that generates the awesome variability and complexity of our physical and mental worlds. Human brains are prodigious in contending with such complexity, in part due to their ability to factor concepts into more

Beyond fixation: active foveal processing in freely viewing primates
November 18, 2020
Jacob Yates
University of Maryland, Department of Biology

Most of the core computational concepts in visual neuroscience come from studies using anesthetized or fixating subjects. Although this approach is designed to maximize experimental control – by stabilizing the subject’s gaze – the net result is that the most commonly used visual stimulus is a fixation point and little is known about how active visual behavior shapes the encoding process

Modeling memory and learning with a scale-invariant neural timeline
November 13, 2020
Zoran Tiganj
Indiana University

Building artificial agents that can mimic human learning and reasoning has been a longstanding objective in artificial intelligence. I will discuss some of the empirical data and computational models from neuroscience and cognitive science that could help us advance towards this goal. Specifically, I will talk about the importance of structured representations of knowledge, particularly about mental or cognitive maps for time, space, and

Symbolic binding with distributed representations
October 30, 2020
Friedrich Sommer
Redwood Center for Theoretical Neuroscience, UC Berkeley

Connectionism is a movement in Psychology that began in the 1980s to understand cognition using neural network models operating on distributed representations. Connectionism not only greatly advanced traditional neural networks, e.g. by working out error backpropagation; it also identified their limitations. Importantly, traditional neural networks lack a full set of operations required for symbolic reasoning, in particular,

Probabilistic computation in natural vision
October 21, 2020
Jeremy England
Georgia Institute of Technology

Self-organization is frequently observed in active collectives, from ant rafts to molecular motor assemblies. General principles describing self-organization away from equilibrium have been challenging to identify. We offer a unifying framework that models the behavior of complex systems as largely random, while capturing their driven response properties. Such a “low-rattling principle” enables prediction and control of fine-tuned emergent properties in

Spatiotemporal Power Spectrum of Natural Vision (VSS 2020)
June 19, 2020
Vasha DuTell
Redwood Center, UC Berkeley

When engaging in natural tasks, the human visual system processes a highly dynamic visual data stream. The retina, performing the very first steps in this processing, is thought to be adapted to take advantage of low-level signal regularities, such as the autocorrelation function or power spectrum, to produce a more efficient encoding of the data (Atick & Redlich, 1992). Previous work examined the joint spatio-temporal power spectrum of handheld camera videos and Hollywood movies, showing that power falls as an inverse power-law function of spatial and temporal frequency, with an inseparable relationship (Dong & Atick, 1995). However…
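
The inverse power-law structure is easy to illustrate on synthetic data. The sketch below, a toy stand-in for the spatio-temporal spectra analyzed in the talk, builds a 1-D signal with power spectrum P(f) ∝ 1/f² and recovers the exponent from a log-log fit of its periodogram:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)[1:]   # skip the DC bin

# synthesize noise with power spectrum P(f) ∝ 1/f^2 (amplitude ∝ 1/f, random phase)
amp = 1.0 / freqs
phase = rng.uniform(0, 2 * np.pi, size=freqs.size)
spectrum = np.concatenate([[0.0], amp * np.exp(1j * phase)])
spectrum[-1] = amp[-1]                  # Nyquist component must be real for irfft
signal = np.fft.irfft(spectrum, n=n)

# estimate the power-law exponent from a log-log fit of the periodogram
power = np.abs(np.fft.rfft(signal))[1:] ** 2
slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
print("fitted spectral slope:", slope)   # close to -2 by construction
```

Real movie data complicates this picture, since the joint spatio-temporal spectrum is not separable into independent spatial and temporal power laws.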

What is Health? allostasis and the evolution of human design
February 14, 2020
Peter Sterling
University of Pennsylvania

(Note: this was an MCB/neuro seminar, not a Redwood Seminar.) Human design is constrained by natural selection to maximize performance for a given energy cost. The brain predicts what will be needed and controls metabolism, physiology, and behavior to deliver just enough, just in time. By preventing errors (allostasis), rather than correcting them (homeostasis), energy is saved. Predictive control is a core function that requires rapid computations by the whole brain to guide its tiny effector (the hypothalamus).

Probabilistic computation in natural vision
February 12, 2020
Ruben Coen-Cagli
Albert Einstein College of Medicine

A central goal of vision science is to understand the principles underlying the perception and cortical encoding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is

Computing in Superposition in the Classical Domain
February 7, 2020
Pentti Kanerva
Redwood Center for Theoretical Neuroscience, UC Berkeley

The standard model of computing treats variables very differently from their values. Variables are addresses to memory, and values are the contents of the addressed memory locations. This model of computing has proved extremely successful, but it is poorly suited to modeling the brain’s computing, and it does not look like what we see happening in brains. Rather, information is widely distributed over massive circuits, and

Sparse Deep Predictive Coding: a model of visual perception
February 5, 2020
Victor Boutin
Institute of Neuroscience of la Timone (INT), Aix-Marseille University

Building models to efficiently represent images is a central problem in the machine learning community. The brain, and especially the visual cortex, has long since found economical and robust solutions to this problem. At the local scale, Sparse Coding is one of the most successful frameworks for modeling neural computation in the visual cortex. It derives directly from the efficient coding hypothesis, and can be thought of as a competitive mechanism that

Visual Cortical Processing:  Image to Object Representation
January 29, 2020
Rudiger von der Heydt
Johns Hopkins University

Image understanding is often conceived as a hierarchical process with many levels where complexity and invariance of object selectivity gradually increase with level in the hierarchy. In contrast, neurophysiological studies have shown that figure-ground organization and border ownership coding, which imply understanding of the object structure of an image, occur at levels as low as V1 and V2 of the

Thesis seminar - The Sparse Manifold Transform and Unsupervised Learning for Signal Representation
December 18, 2019
Yubei Chen
EECS, UC Berkeley

In this talk, I will first present a signal representation framework called the Sparse Manifold Transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis. It turns non-linear transformations in the primary sensory signal space into linear interpolations in a representational embedding space while maintaining approximate invertibility. The sparse manifold transform is an

Visual coding strategies implied by individual differences or adaptation
November 21, 2019
Kara Emery
University of Nevada, Reno

An important goal of vision science is to understand the coding strategies underlying the representation of visual information. I will describe experiments and analyses where we have explored these coding strategies using two different approaches. In the first approach, we factor-analyzed individual differences in observers’ color judgments to reveal the representational structure of color appearance. In contrast to

From paws to hands:  The evolution of the forelimb and cortical areas involved in complex hand use
November 20, 2019
Leah Krubitzer
UC Davis

Forelimb morphology and use in mammals are extraordinarily diverse. Evolution has produced wings, flippers, hooves, paws and hands, which are specialized for a variety of behaviors such as flying, swimming and grasping, to name a few. While there is a wealth of data in human and non-human primates on the role of motor cortex and posterior parietal cortical areas in reaching and grasping with the hand, these cortical networks

Neural circuit mechanisms of rapid associative learning
November 6, 2019
Aaron Milstein
Stanford University School of Medicine, Dept. of Neurosurgery

How do neural circuits in the brain accomplish rapid learning? When foraging for food in a previously unexplored environment, animals store memories of landmarks based on as little as a single view. Animals also remember landmarks and navigation decisions that eventually lead to food, which requires that the brain associate events with delayed outcomes. I will present evidence that a particular neural circuit structure

Rebooting AI: Building Machines we can Trust
September 25, 2019
Gary Marcus
NYU

Artificial intelligence has a trust problem. We are relying on A.I. more and more, but it hasn’t yet earned our confidence. Despite the intense recent hype surrounding AI, no current AI system remotely approaches the flexibility of human intelligence; as I will show, even the ability to read at grade-school level eludes current approaches. Building on my recent synthesis with Ernest Davis, I argue that

Of Rodents And Primates: Comparative Decision Making
September 4, 2019
Pam Reinagel
UC San Diego

In rapid sensory decision-making, the time taken to choose and the accuracy of the choice are related in three distinct ways. First, it takes more time to assess noisy signals, so decisions about weak sensory stimuli are slower, as well as less accurate. Second, for any given stimulus strength, adopting an overall policy of higher stringency will make decisions slower, but more accurate. Third, even when stimulus strength and stringency

Dynamic Programming with Sparse Codes: Investigating a New Computational Role for Sparse Representations of Natural Image Sequences
September 3, 2019
Peter Loxley
University of New England, Armidale, Australia

Dynamic programming (DP) is a general algorithmic approach used in optimal control and Markov decision processes that balances desire for low present costs with undesirability of high future costs when choosing a sequence of controls to apply over time. Recent interest in this field has grown since Google DeepMind’s algorithms

Simplifying Mixture Models with the Hierarchical EM Algorithm, and its Application to Eye Movement Analysis in Psychological Research
July 10, 2019
Antoni Chan and Janet Hsiao
City University of Hong Kong, University of Hong Kong

We propose a hierarchical EM algorithm for simplifying a finite mixture model into a reduced mixture model with fewer mixture components. The reduced model is obtained by maximizing a variational lower bound of the expected log-likelihood of a set of virtual samples. We develop three applications for our mixture

Unifying perceptual and behavioral learning through the synergistic coupling of feedback and feedforward control through counterfactual errors
June 5, 2019
Paul F.M.J. Verschure
Barcelona Institute of Science and Technology

Motor control is usually seen as the result of a gradual replacement of feedback by feedforward control. The perceptual states that inform this process are considered to be defined through qualitatively different processes giving rise to the classical distinction

Informational Appetites + (un)Natural Statistics = “Screen Addiction”
May 15, 2019
William Softky
Visiting scholar, Bioengineering Department, Stanford University

It is a truth not yet universally acknowledged that a self-regulating system which is stable in one environment can become unstable when the environment changes. This truth is called homeostatic fragility.  Mathematically, the key mechanism is sign-reversal, which converts a

Solving Hard Computational Problems using Oscillator Networks
May 1, 2019
Tianshi Wang
EECS, UC Berkeley

Over the last few years, there has been considerable interest in Ising machines, i.e., analog hardware for solving difficult (NP-hard/complete) computational problems effectively. We present a new way to make Ising machines using networks of coupled self-sustaining
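
A software sketch can illustrate the Ising formulation itself (though not the coupled-oscillator hardware): minimize the energy E(s) = ½ sᵀJs over spins s ∈ {−1, +1}ⁿ. Below, a max-cut instance on a small random graph is minimized with simulated annealing as an illustrative stand-in for the analog dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# random graph on n nodes; J is its symmetric 0/1 coupling (adjacency) matrix
n = 12
J = np.triu(rng.integers(0, 2, size=(n, n)), k=1).astype(float)
J = J + J.T

def energy(s):
    # Ising energy: each cut edge contributes -1, each uncut edge +1
    return 0.5 * s @ J @ s

# simulated annealing: single-spin flips with a geometric cooling schedule
s = rng.choice([-1, 1], size=n)
for T in np.geomspace(2.0, 0.01, 5000):
    i = rng.integers(n)
    dE = -2 * s[i] * (J[i] @ s)          # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]

print("final Ising energy:", energy(s))
```

Minimizing this energy is equivalent to maximizing the cut, which is why NP-hard combinatorial problems map naturally onto Ising hardware.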

Complexity of linear regions in deep networks
April 24, 2019
David Rolnick
University of Pennsylvania

It is well-known that the expressivity of a neural network depends on its architecture, with deeper networks expressing more complex functions. For ReLU networks, which are piecewise linear, the number of distinct linear regions is a natural measure of expressivity. It is possible to
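
For intuition, the sketch below counts linear regions of a small random ReLU network on a 1-D input by enumerating distinct unit on/off patterns along the line; each pattern corresponds to one linear piece. The architecture and sampling grid are arbitrary illustrative choices, not the constructions from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# a small random ReLU net: 1 input -> 8 hidden -> 8 hidden -> 1 output
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    # which ReLU units are "on" determines the local linear map
    h1 = np.maximum(W1 @ x + b1, 0.0)
    h2 = np.maximum(W2 @ h1 + b2, 0.0)
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

# sample the input line densely; each distinct on/off pattern is one linear region
xs = np.linspace(-10, 10, 20_001)
patterns = {activation_pattern(np.array([x])) for x in xs}
print("distinct linear regions found:", len(patterns))
```

Dense 1-D sampling makes the counting trivial; in higher-dimensional input spaces the regions form a hyperplane arrangement and counting them is where the theory becomes interesting.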

Over-generalization in humans learning a complex skill
April 17, 2019
Gautam Agarwal
Champalimaud Institute of the Unknown, Lisbon Portugal

Learning a complex skill requires traversing a potentially enormous search space. While reinforcement learning (RL) algorithms can approach human levels of performance in complex tasks, they require much more training to do so. This may be because humans constrain search

Covariant neural network architectures for learning physics
March 21, 2019
Risi Kondor
University of Chicago

Deep neural networks have proved to be extremely effective in image recognition, machine translation, and a variety of other data-centered engineering tasks. However, other domains, such as learning to model physical systems, require a more careful examination of

Biology as Information Dynamics
March 20, 2019
John Baez
UC Riverside

If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept

Church Encoding as the link between Cognition and Neuroscience
February 20, 2019
Steve Piantadosi
UC Berkeley, Dept. of Psychology

I’ll present an approach from mathematical logic which shows how sub-symbolic dynamics may give rise to higher-level cognitive representations of structures, systems of knowledge, and algorithmic processes. This approach posits

Towards a paradigm for investigating the mechanistic basis of perception
February 15, 2019
James Cooke
UCL

Perception has long been considered to be an inference operation in which internal models of the sensory environment are constructed through experience and are subsequently used in order to assign sensory stimuli to

CTRL-labs: Non-Invasive Neural Interfaces for Human Augmentation
February 13, 2019
Patrick Kaifosh
CSO & Co-Founder, CTRL-labs

As the nervous system’s evolved output, spinal motor neuron activity is from an evolutionary perspective a natural source of signals for a neural interface. Furthermore, the amplification of these signals by muscle fibers allows

Computer Vision Beyond Recognition
February 6, 2019
Stella Yu
UC Berkeley

Computer vision has advanced rapidly with deep learning, achieving super-human performance on a few recognition benchmarks.  At the core of the state-of-the-art approaches for image classification, object detection, and

Dynamic Neural Fields: the embodiment of neural computation
February 5, 2019
Yulia Sandamirskaya
ETH Zurich

Activity of neuronal populations in several cortical regions can be described by a Dynamic Neural Field (DNF) equation. A DNF is an activation function, continuous in time and space, defined over

Representation Learning and Exploration in RL
January 30, 2019
John Co-Reyes
UC Berkeley

Sparse reward and long horizon tasks are among the most interesting yet challenging problems to solve in reinforcement learning. I will discuss recent work leveraging representation learning

Neural Circuit Mechanisms Underlying Cognition in Rats
January 16, 2019
Carlos Brody
Princeton University

I will describe studies of the neural bases of cognitive processes. Rodents, mostly rats, are trained to perform behaviors that lend themselves to quantitative modeling that can help identify

Zoom in on Neural Circuits in Macaque Visual Cortex
November 8, 2018
Shiming Tang
Peking University

My lab focuses on the neural mechanism of visual object recognition and developing techniques for neuronal circuit mapping. We have established long-term two-photon imaging in awake monkeys — the first and critical

Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
November 7, 2018
Gamaleldin Elsayed
Google Brain

Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans

Projection pursuit in high-dimensions
October 17, 2018
Peter Bickel
UC Berkeley

(Joint work with Gil Kur and Boaz Nadler)

The notion of projection pursuit, according to Huber (1985), first appeared in the work of J.B. Kruskal (1969, 1972) and was implemented and popularized by Friedman and Tukey (1974). Key papers are Huber
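A toy illustration of the projection-pursuit idea (not the speakers' algorithm): scan random unit directions and keep the one whose 1D projection looks most non-Gaussian, scored here by absolute excess kurtosis:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: Gaussian in 5D, except the first axis is bimodal.
X = rng.normal(size=(2000, 5))
X[:, 0] += rng.choice([-3.0, 3.0], size=2000)

def excess_kurtosis(z):
    z = (z - z.mean()) / z.std()
    return np.mean(z**4) - 3.0

# Projection pursuit by random search: maximize a non-Gaussianity index.
best_dir, best_score = None, -np.inf
for _ in range(500):
    w = rng.normal(size=5)
    w /= np.linalg.norm(w)
    score = abs(excess_kurtosis(X @ w))
    if score > best_score:
        best_dir, best_score = w, score

print("most non-Gaussian direction found:", np.round(best_dir, 2))
```

The search reliably gravitates toward the bimodal axis, since every purely Gaussian projection has excess kurtosis near zero.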

A New Benchmark and Progress Toward Improved Weakly Supervised Learning
September 26, 2018
Russ Webb
Apple

A primary goal of this work is to give a clear example of the limits of current, deep-learning techniques and suggest how progress can be made.  The presentation will include a discussion of open questions, unpublished experiments, suggestions on

Learning Representations for Planning
September 18, 2018
Aviv Tamar
UC Berkeley Artificial Intelligence Research lab

How can we build autonomous robots that operate in unstructured and dynamic environments such as homes or hospitals? This problem has been investigated under several disciplines, including planning (motion planning, task planning, etc.), and reinforcement learning. While both of these

Information decomposition
September 17, 2018
Jürgen Jost
MPI for Mathematics in the Sciences, Leipzig

In many situations, two or more sources have some information about a target. For instance, sensory input and context information can jointly determine the firing pattern of a neuron. Since the information from the two sources is typically not identical, one

Correlated neural activity across the brains of socially interacting bats
September 12, 2018
Wujie Zhang
Yartsev Lab, UC Berkeley

Social interaction is fundamental to our everyday life and that of diverse animals. When two animals interact, they behave in different ways. Thus, to get a full picture of the neural activity underlying each interaction, we need to record from the brains of both animals at the same time. We do so in

Two spheres for 3D vision
June 15, 2018
Andrew Glennerster
University of Reading

It is clear that animals do not build a 3D reconstruction of their environment in the way that computer vision does in SLAM systems. I will describe two experiments from our lab (mostly in VR but one reproduced in

Characterizing neurons in the visual area V4 through interpretable machine learning
May 23, 2018
Reza Abbasi-Asl
UC Berkeley

In the past decade, research in machine learning has been exceedingly focused on the development of algorithms and models with remarkably high predictive capabilities. Models such as convolutional neural networks (CNNs) have achieved state-of-the-art predictive performance for

Expander graph architectures for high-capacity neural memory
May 22, 2018
Rishidev Chaudhuri
UT Austin / Simons Institute, UC Berkeley

Recent results in information theory show how error-correcting codes on large, sparse graphs (“expander graphs”) can leverage multiple weak constraints to produce near-optimal performance. I will demonstrate a mapping between these error-correcting codes and canonical models of neural memory (“Hopfield networks”), and
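For reference, a minimal Hopfield network of the canonical kind the mapping starts from (illustrative; this is the standard Hebbian model, not the expander-graph construction):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # P random bipolar memories

# Hebbian outer-product weights with zero self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(state, steps=20):
    """Synchronous sign updates until (in practice) a fixed point."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt 10% of one stored pattern, then recover it.
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
print("overlap with stored pattern:", (recovered @ patterns[0]) / N)
```

Well below capacity (P much less than 0.14 N), the corrupted probe falls back into the stored attractor; the talk's point is that expander-graph codes achieve far better capacity scaling than this classical construction.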

Slow Feature Analysis (Biological Modeling and Technical Applications)
April 6, 2018
Laurenz Wiskott
Institut für Neuroinformatik (Ruhr Universität Bochum)

Slow feature analysis (SFA) is a biologically motivated algorithm for extracting slowly varying features from a quickly varying signal and has proven to be a powerful general-purpose preprocessing method for spatio-temporal data in brain modeling as well as technical applications.  We have applied SFA to the learning of
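A minimal linear-SFA sketch (toy assumptions: two sinusoidal sources, unknown linear mixing): whiten the signal, then take the direction whose temporal derivative has the least variance:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
A = rng.normal(size=(2, 2))                    # unknown mixing matrix
X = np.stack([slow, fast], axis=1) @ A.T

# Step 1: center and whiten.
X = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X.T))
Z = X @ evecs / np.sqrt(evals)

# Step 2: the slowest feature is the smallest-eigenvalue direction of the
# covariance of the temporal derivative.
dZ = np.diff(Z, axis=0)
w = np.linalg.eigh(np.cov(dZ.T))[1][:, 0]
extracted = Z @ w

corr = np.corrcoef(extracted, slow)[0, 1]
print("correlation with the slow source: %.3f" % abs(corr))
```

The slow sinusoid is recovered up to sign, which is the core of SFA; nonlinear SFA applies the same two steps after a fixed nonlinear expansion of the input.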

Efficient Balanced Networks
April 5, 2018
Sophie Deneve
University of Paris/Simons Institute

The brain computes with spikes, and spikes are costly to metabolism. Yet, the spiking responses of most cortical cells appear extremely noisy, to the extent that the only feature that repeats from trial to trial is the firing rate (and only for the small minority of cells that are

Informed Approaches to Deep Learning via Neural Networks with Random Parameters
April 3, 2018
Yasaman Bahri
Google Brain

Obtaining a better understanding of neural networks with random parameters is relevant for deep learning practice — for instance, by informing good initializations — and is a natural first step in building a more complete base of knowledge within deep learning. I will survey some of our recent work at Google Brain which

Peripheral Representations for computational models of Human and Machine Perception
March 8, 2018
Arturo Deza
UC Santa Barbara

Are there any benefits in incorporating the foveated nature of human vision into image-based metrics of perception and computer vision systems? In this talk I hope to advance our understanding of this question through

The 1000+ neurons challenge: emergent simplicity in (very) large populations
February 6, 2018
Leenoy Meshulam
Princeton University

Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the

Sensory Integration, Density Estimation, and Information Retention
January 31, 2018
Joe Makin
UCSF

A common task facing computational scientists and, arguably, the brains of primates more generally is to construct models for data, particularly ones that invoke latent variables. Although it is often natural to identify the

Towards artificial general intelligence: Brain-inspired CAPTCHA breaking and Atari playing
January 24, 2018
Miguel Gredilla
Vicarious, Inc.

Compositionality, generalization, and learning from a few examples are among the hallmarks of human intelligence. In this talk I will describe how Vicarious combines these ideas to create approaches to CAPTCHA breaking and Atari

Looking and seeing in the primary visual cortex
December 13, 2017
Zhaoping Li
University College London

I will present a review of the role of the primary visual cortex V1 in the functions of looking and seeing in vision. Looking is attentional selection, to select a fraction of…

Decoding the computations of high-level auditory neurons
November 29, 2017
Joel Kaardal
Salk Institute

Characterizing the computations performed by high-level sensory regions of the brain remains enigmatic due to the many nonlinear signal transformations that separate the input…

Have We Missed Most of What the Neocortex Does? Allocentric Location as the Basis of Perception
November 16, 2017
Jeff Hawkins
Numenta

In this talk I will describe a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. The second…

Maximum Entropy and the Inference of Patterns in Nature
November 8, 2017
John Harte
UC Berkeley

Constrained maximization of information entropy yields least biased probability distributions. In statistical physics, this powerful inference method yields classical thermodynamics under…
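The core computation fits in a few lines: the least-biased distribution on die faces 1..6 with a fixed mean takes the Gibbs form p_i proportional to exp(-lambda * i), and lambda is found numerically (a generic textbook example, not from the talk):

```python
import numpy as np

states = np.arange(1, 7)     # die faces
target_mean = 4.5            # the constraint

def mean_at(lmbda):
    """Mean of the exponential-family (maximum-entropy) distribution."""
    p = np.exp(-lmbda * states)
    p /= p.sum()
    return p @ states

# The mean decreases monotonically in lambda, so bisect for the constraint.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_at(mid) > target_mean:
        lo = mid
    else:
        hi = mid

lmbda = 0.5 * (lo + hi)
p = np.exp(-lmbda * states)
p /= p.sum()
print("maxent distribution:", np.round(p, 3), "mean:", round(p @ states, 3))
```

With a mean above 3.5 the solution tilts probability toward the high faces, exactly as the Boltzmann analogy from statistical physics suggests.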

Curiosity-driven Exploration by Self-supervised Prediction
October 11, 2017
Deepak Pathak
UC Berkeley/BAIR

In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent…

A Capacity Scaling Law for Artificial Neural Networks
September 6, 2017
Gerald Friedland
UC Berkeley

In this talk, we derive the calculation of two critical numbers that quantify the capabilities of artificial neural networks with gating functions, such as sign, sigmoid, or rectified…

Discovering Relationships and their Structures Across Disparate Data Modalities
August 16, 2017
Joshua Vogelstein
Johns Hopkins University

Determining whether certain properties are related to other properties is fundamental to scientific discovery. As data collection rates accelerate, it is becoming increasingly…

The Stabilized Supralinear Network, or, The Importance of Being Loosely Balanced (Miller) / Spatiotemporal profiles of spiking variability in recurrent networks (Doiron)
August 15, 2017
Ken Miller and Brent Doiron
Columbia University and University of Pittsburgh

I will describe the Stabilized Supralinear Network mechanism and its application to understanding sensory cortical behavior. The mechanism is based on a network of excitatory…

State Dependent Modulation of Perception Based on a Computational Model of Conditioning
July 18, 2017
Jordi Puigbò
Universitat Pompeu Fabra (Barcelona - Spain)

The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex which in turn…

Selectivity, hyper-selectivity and gain control: A comparison of non-linear models in the early visual system
July 10, 2017
David Field
Cornell University

I will discuss some implications of an approach that attempts to describe the various non-linearities of neurons in the visual pathway using a geometric framework. This approach…

Capacity and Trainability in Recurrent Neural Networks
June 21, 2017
Jasmine Collins
Google

Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to…

The Brain's Circuits Suggest Computing with High-Dimensional Vectors
June 9, 2017
Pentti Kanerva
Redwood Center for Theoretical Neuroscience

Traditional computing is deterministic. It is based on bits and assumes that the bits compute perfectly. However, near-perfection at high speeds consumes large amounts of…
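A toy flavor of computing with high-dimensional vectors (illustrative conventions, not necessarily the talk's: bipolar 10,000-D vectors, binding by elementwise multiplication, bundling by a majority sign, similarity by normalized dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000

def rand_hv():
    """A random bipolar hypervector; any two are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

country, capital = rand_hv(), rand_hv()   # role vectors
usa, dc = rand_hv(), rand_hv()            # filler vectors

# Encode the record {country: USA, capital: DC}: bind roles to fillers,
# bundle by summing, and binarize with a majority sign.
record = np.sign(country * usa + capital * dc)
record[record == 0] = 1

# Query: unbind with the 'capital' role and compare against known fillers.
query = record * capital
sim_dc = (query @ dc) / D
sim_usa = (query @ usa) / D
print(f"similarity to DC: {sim_dc:.2f}, to USA: {sim_usa:.2f}")
```

Because random high-dimensional vectors are nearly orthogonal, the correct filler stands out clearly from noise even though the record is a single fixed-width vector, and the operations tolerate many flipped bits.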

Cognitive Mapping and Planning for Visual Navigation
June 7, 2017
Saurabh Gupta
UC Berkeley

We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in…

Signs of space
June 2, 2017
Alex Terekhov
Paris Descartes University ERC FEEL

I will talk about how a naive agent can learn the notion of space ex nihilo. I am particularly interested in what hints space gives to the agent by constraining the agent’s sensorimotor…

Time-Contrastive Networks: Self-Supervised Learning from Multi-View Observation
May 24, 2017
Pierre Sermanet
Google Brain

We propose a self-supervised approach for learning representations entirely from unlabeled videos recorded from multiple viewpoints. This is particularly relevant to robotic imitation…

The Future of the Multi-core Platform: Task-Superscalar Extensions to Von Neumann Architecture and Optimization for Neural Networks
March 22, 2017
Michael Frank
Magicore Systems

Technology scaling had been carrying computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and…

Elements of Theoretical Neural Science
March 10, 2017
Chris Hillar
Redwood Center for Theoretical Neuroscience

I will present 4 topics from the theory of brain computation: Memory/Encoding, Invariance, Behavioral Rhythms, and Language. Each topic incorporates a particular mathematical model…

Learning-based and behavioural evidence for probabilistic perception in the cortex
March 2, 2017
Jozsef Fiser
Dept. of Cognitive Science, Central European University

The notion of interpreting cortical operations as probabilistic computation has been steadily gaining ground in neuroscience, and with the emergence of the PPC-based and…

Real-Time and Adaptive Auditory Neural Processing
March 1, 2017
Sahar Akram
Starkey Hearing Research Center

Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory…

Building a platform for machine intelligence
February 13, 2017
Amir Khosrowshahi
Intel Corporation

Deep learning is now state-of-the-art across a wide variety of machine learning domains including speech, video, and text. Nervana is a startup providing deep learning as a platform…

The Neural Code Issue
February 9, 2017
Christoph von der Malsburg
Frankfurt Institute for Advanced Studies and Platonite AG

Let’s go for the real thing: Every waking second, our mind represents for us the situation we are immersed in. How do the physical states of our brain create this mental imagery?

A planning game reveals distributed patterning in player behavior
February 8, 2017
Gautam Agarwal
Champalimaud Neuroscience Program, Portugal

Decision-making has been modeled in great detail based on 2-alternative choice (2AC) tasks; however, it remains unclear how these models apply to more naturalistic settings, where…

Computational models of vision: From early vision to deep convolutional neural networks
February 6, 2017
Felix Wichmann
University of Tübingen

Early visual processing has been studied extensively over the last decades. From these studies a relatively standard model emerged of the first steps in visual processing. However…

Intelligent systems which can communicate about what they see
November 30, 2016
Marcus Rohrbach
UC Berkeley

Language is the most important channel for humans to communicate about what they see. To allow an intelligent system to effectively communicate with humans it is thus important to…

Learning to Forecast and Control from Pixels
November 9, 2016
Pulkit Agrawal
EECS, UC Berkeley

The ability to forecast how different objects will be affected by an applied action (i.e., intuitive physics) is likely to be very useful for executing novel manipulation tasks…

Could a neuroscientist understand a microprocessor?
October 26, 2016
Eric Jonas
UC Berkeley

Optimal energy-efficient coding in sensory neurons
October 25, 2016
Douglas L. Jones
ECE Department, University of Illinois at Urbana-Champaign

Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding…

Towards bridging the gap between deep learning and biology
September 27, 2016
Yoshua Bengio
University of Montreal

We explore the following crucial question: how could brains potentially perform the kind of powerful credit assignment that allows hidden layers of a very deep network to be trained…

Evolving the visual brain: computational rules suggested by brain scaling and niche adaptations.
September 8, 2016
Barbara Finlay
Cornell University

Evolution can be thought of as a filter for evolvable developmental architectures. Comparing neuron numbers in the retina, midbrain, thalamus and cortex for a variety of…

Inference using the timing of vocal sequences in songbirds
September 7, 2016
Dan Stowell
Queen Mary, University of London

In songbird vocalisation, the choice and sequencing of units (syllables) are widely studied, and amenable to standard machine learning methods. However, the fine detail in…

Group Equivariant Convolutional Networks
June 27, 2016
Taco Cohen
University of Amsterdam

The Group equivariant Convolutional Neural Network (G-CNN) is a new kind of neural network that obtains better sample complexity by exploiting symmetries. G-CNNs use G-convolutions…
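The equivariance property at the heart of G-CNNs can be checked numerically for the group of 90° rotations (a hand-rolled sketch of the lifting layer for p4; translations and learning are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
filt = rng.normal(size=(3, 3))

def corr2d(x, k):
    """Plain 'valid' cross-correlation with a 3x3 kernel."""
    H, W = x.shape[0] - 2, x.shape[1] - 2
    return np.array([[np.sum(x[i:i + 3, j:j + 3] * k) for j in range(W)]
                     for i in range(H)])

def lift(x):
    """Lifting G-correlation: correlate with all four rotated filter copies."""
    return np.stack([corr2d(x, np.rot90(filt, r)) for r in range(4)])

out = lift(img)
out_rot = lift(np.rot90(img))

# Equivariance: rotating the input rotates each output map and cyclically
# permutes the four rotation channels.
expected = np.stack([np.rot90(out[(r - 1) % 4]) for r in range(4)])
assert np.allclose(out_rot, expected)
print("p4 lifting correlation is rotation-equivariant")
```

Ordinary convolution only commutes with translation; stacking rotated filter copies is what extends the symmetry group, which is the source of the improved sample complexity.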

Independently Conscious Brain Regions
June 15, 2016
James Blackmon
San Francisco State University

Conscious minds are sometimes revealed upon the permanent or even temporary loss of a functioning brain hemisphere. We know this because of two medical procedures which have existed…

Using Analogy to Recognize Visual Situations
May 18, 2016
Melanie Mitchell
Portland State University

Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human…

Neuromorphic Chips: Combining Analog Computation with Digital Communication
March 23, 2016
Kwabena Boahen
Stanford University

As transistors shrink to nanoscale dimensions, it is becoming increasingly difficult to make the current computing paradigm work. At two-dozen nanometers wide, a transistor’s…

From Texture Synthesis to Neural Art
March 1, 2016
Leon Gatys
University of Tübingen

We introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual…

Optimisation of spectral and nonlinear embeddings
February 24, 2016
Miguel Carreira-Perpiñán
UC Merced

One fundamental class of dimensionality reduction algorithms are nonlinear embedding (or manifold learning) algorithms. Their goal is to find meaningful low-dimensional coordinates…

Density Modeling of Images using a Generalized Divisive Normalization Transformation
February 19, 2016
Johannes Ballé
Eero Simoncelli’s lab at NYU

We introduce a parametric nonlinear transformation for jointly Gaussianizing patches of natural images. The transformation is differentiable, can be efficiently inverted…

Hallmarks of Deep Learning in the Brain
February 17, 2016
Andrew Saxe
Harvard University

Anatomically, the brain is deep. To understand the ramifications of depth on learning in the brain requires a clear theory of deep learning. I develop the theory of gradient…

Bio-Inspired Artificial Olfactory System
February 3, 2016
Ping-Chen Huang
EECS, UC Berkeley

The escalating statistical variation of device behaviors in the nano-scale era has led to a search for alternative computational paradigms where randomness and statistics…

The Hebbian normalization model of cortical adaptation
December 16, 2015
Mike Landy
NYU

Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their…

Screen Addiction as Runaway De-calibration
December 14, 2015
William Softky & Criscillia Benford
independent consultant (William Softky) and Stanford (Criscillia Benford)

Medical and governmental authorities around the world have identified a host of problematic behaviors with names like internet addiction, gaming addiction, and texting addiction…

Seeing the Earth in the Cloud
December 2, 2015
Steven Brumby
Descartes Labs

The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the…

Unexpected roles for inhibitory circuits in the neocortex
November 18, 2015
Hillel Adesnik
UC Berkeley

I will present two studies – one in the barrel cortex and one in the visual cortex – that detail surprising roles for inhibitory circuits in the neocortex. The first will address…

Exploration Biases for Task Learning In Machines and Animals
November 17, 2015
Manuel Lopes
INRIA, Bordeaux

Curiosity is very predominant in people and many animals, but its mechanisms are poorly understood. Motivated by research on neuroscience and child development we want to…

Astrocytes
November 13, 2015
David Zipser
UC Berkeley

Long thought to be the brain's janitorial staff, sopping up excess transmitter and turning up the blood supply when the neurons complained, astrocytes are now known to be doing something…

Robotic Visuomotor Learning
November 4, 2015
Chelsea Finn and Sergey Levine
UC Berkeley

Policy search methods based on reinforcement learning and optimal control can allow robots to automatically learn a wide range of tasks. However, practical applications of policy…

A Deconvolutional Competitive Algorithm (DCA)
October 30, 2015
Garret Kenyon
Los Alamos National Laboratory

The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear…
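For context, a minimal LCA sketch (generic toy parameters, not the talk's deconvolutional variant): leaky-integrator dynamics with lateral inhibition G = dictionary Gram matrix minus identity, and a soft threshold, applied to a small sparse-coding problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_dims = 50, 20
Phi = rng.normal(size=(n_dims, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary columns

# A signal built from 3 dictionary elements.
true_idx = [3, 17, 41]
x = Phi[:, true_idx] @ np.array([1.0, -0.7, 0.5])

lam, dt = 0.1, 0.05
b = Phi.T @ x                                # feedforward drive
G = Phi.T @ Phi - np.eye(n_neurons)          # lateral inhibition
u = np.zeros(n_neurons)

def soft_threshold(u):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0)

# Leaky-integrator dynamics: du/dt = b - u - G a, with a = T_lam(u).
for _ in range(2000):
    u += dt * (b - u - G @ soft_threshold(u))
a = soft_threshold(u)

print("active neurons:", np.nonzero(a)[0])
```

At the fixed point only a few units stay above threshold, and the network state is a sparse code for x; this is the baseline the deconvolutional extension builds on.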

NICE: Nonlinear Independent Components Estimation
October 14, 2015
Laurent Dinh
University of Montreal

Recent advances in deep generative models involved learning complex nonlinear transformations from a simple distribution of independent factors to a more complex distribution…
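The building block of NICE, the additive coupling layer, fits in a few lines; the map is exactly invertible with unit Jacobian determinant no matter what the coupling function m is (a toy one-layer m below, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))      # toy coupling network: one linear layer + tanh

def m(h):
    return np.tanh(W @ h)

def forward(x):
    """Split the variables; shift one half by a function of the other."""
    x1, x2 = x[:2], x[2:]
    return np.concatenate([x1, x2 + m(x1)])

def inverse(y):
    """Exact inverse: subtract the same shift."""
    y1, y2 = y[:2], y[2:]
    return np.concatenate([y1, y2 - m(y1)])

x = rng.normal(size=4)
assert np.allclose(inverse(forward(x)), x)
print("coupling layer inverts exactly; log|det J| = 0")
```

Stacking such layers (alternating which half is shifted) yields a deep, trainable bijection whose log-likelihood is tractable, which is the key design choice of the paper.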

Building a Large-Scale Neuromorphic Hardware Systems Roadmap
September 8, 2015
Jennifer Hasler
Georgia Institute of Technology

Cognitive Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are meeting hard physical limits. These silicon systems mimic…

Combinatorial Energy Learning for Image Segmentation
September 2, 2015
Jeremy Maitin-Shepard
Computer Science, UC Berkeley

Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern…

The Role of Cortical Feedback in Olfactory Processing
July 29, 2015
Gonzalo Otazu
Cold Spring Harbor Laboratory

The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the…

Efficient computation in the brain -- two case studies
July 23, 2015
Xuexin Wei
University of Pennsylvania

It has long been proposed that the brain should perform computation efficiently to increase the fitness of the organism. However, the validity of this prominent…

Noise-enhanced associative memory, creativity, and other problems in faulty information processing
July 22, 2015
Lav Varshney
University of Illinois, Urbana-Champaign

Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an…

Discovery of salient low-dimensional dynamical structure in neuronal population activity using Hopfield networks
July 21, 2015
Felix Effenberger
Max Planck Institute

We present a novel method for the classical task of finding and extracting recurring spatiotemporal patterns in recorded spiking activity of neuronal populations. In contrast…

Training and Understanding Deep Neural Networks for Robotics, Design, and Perception
May 29, 2015
Jason Yosinski
Cornell University

Artificial Neural Networks (ANNs) form a powerful class of models with both theoretical and practical advantages. Networks with more than one hidden layer…

Accounting for variability in the awake visual cortex
May 22, 2015
Dan Butts
University of Maryland

Sensory neuron responses in awake cortex can be quite variable across repeated presentations of the same stimulus. In many cortical areas, only a small fraction of…

Combining supervised and unsupervised learning
May 13, 2015
Harri Valpola
ZenRobotics Ltd.

Currently some of the best deep learning methods, e.g. the ones that are beating records on ImageNet, rely purely on supervised learning. Theoretically, it is clear that…

The neural dynamics of Braille letter perception
May 7, 2015
Santani Teng
MIT

Functional changes in visual cortex as a consequence of blindness are a major model for studying crossmodal neuroplasticity. Previous work has shown that…

The BioRC Biomimetic Real-Time Cortex: Focus on Nonlinear Dendritic Computations
April 23, 2015
Alice Parker
USC

This informal seminar will give an overview of the BioRC project, and a discussion of why brain emulation is so difficult. Some breakthroughs in nanotechnology…

From cognitive modeling to labor markets: estimating the economic cost of unrealized human potential
April 8, 2015
Vivienne Ming
Socos, Inc.

Dr. Vivienne Ming, named one of 10 Women to Watch in Tech by Inc. Magazine, is a theoretical neuroscientist, technologist and entrepreneur. She is the co-founder…

Correlated percolation, fractal dimensions, and scale-invariant distribution of clusters in natural images
April 1, 2015
Saeed Saremi
Salk Institute

Natural images are scale invariant. After a quick tutorial on percolation, I will talk about formulating a geometric view of scale invariance based on percolation theory…

Sampling: a probabilistic approach to cortical computation, learning, and development
March 11, 2015
Jozsef Fiser
Department of Cognitive Science, Central European University

I will present a framework and a combined empirical-computational program that explores what cortical neural representation could underlie our intelligent behavior…

V1 disparity tuning and the statistics of disparity in natural viewing
March 4, 2015
Bill Sprague
UC Berkeley

The efficient coding hypothesis broadly predicts that the tuning properties of neurons should reflect the statistics of the signal being encoded. For example, a sparse…

Neural representations of physical space: the benefits of phase precession and multi-scale codes
March 3, 2015
Andreas Herz
Bernstein Center for Computational Neuroscience Munich

Grid cells in rat medial entorhinal cortex discharge when the animal moves through particular regions of the external world. For each grid cell, these regions form…

Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex
March 3, 2015
James Cooke
Auditory Neuroscience Group, Oxford University

While sensory environments can vary dramatically in their statistics, neurons have a limited dynamic range with which they can encode sensory information. In sensory…

Internal model mismatch is responsible for the majority of errors in neuroprosthetic control
February 25, 2015
Steve Chase
Carnegie Mellon University

What enables proficient control of a brain-computer interface (BCI)? In this talk, I will argue that it is our ability to conceptualize a physical model of the…

Boundary contours and 3D surfaces: intermediate representations of objects and scenes
February 11, 2015
Mark Lescroart
UC Berkeley

The human visual system consists of many areas, each of which represents different features or aspects of the visual world. Primary visual cortex represents…

Pitch perception: All in the timing?
February 9, 2015
Andrew Oxenham
University of Minnesota

Pitch is one of the fundamental attributes of auditory perception. Melodies are created by sequential changes in pitch, and harmonies are formed by simultaneous…

Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning
January 28, 2015
Rich Ivry
Dept. of Psychology, UC Berkeley

Two well-established literatures have provided elegant models of sensorimotor adaptation and decision making, with relatively little connection between the two. I will discuss…

Clinical Brain Profiling: A Neuro-Computational Psychiatry
January 26, 2015
Abraham Peled
Faculty of Medicine, Technion, Israel Institute of Technology

The DSM-5 reveals serious concerns about psychiatric diagnosis. The most serious concern is the failure of the DSM to provide a valid etiopathological psychiatric…

Learning and variability in birdsong
January 21, 2015
Adrienne Fairhall
University of Washington

The birdsong system has become a paradigmatic example of biological learning. In this talk we will discuss how detailed biological responses can help this system to implement…

Using sensorimotor dependencies to understand the nature of perceptual experience and the notion of “body”
January 14, 2015
Kevin O'Regan
CNRS - Université Paris Descartes

The “sensorimotor” theory of perceptual experience suggests that experience of the world necessarily involves understanding the relation between possible actions…

The Bayesian brain, phantom percepts and brain implants
December 9, 2014
Dirk DeRidder
Dunedin School of Medicine, University of Otago, New Zealand

Captain Ahab, following the loss of his leg in a skirmish with the whale Moby Dick, perceives phantom pain, and Ludwig van Beethoven, after losing his hearing, perceives…

Consciousness
December 2, 2014
Christof Koch
Allen Institute for Brain Science

Mapping semantic representation in the brain using natural language
November 12, 2014
Alex Huth
Gallant lab, UC Berkeley

Human beings have the unique ability to extract the meaning, or semantic content, from spoken language. Yet little is known about how the semantic content of everyday…

Feature allocations, probability functions, and paintboxes
October 15, 2014
Tamara Broderick
UC Berkeley

Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously…

Long-range and local circuits for top-down modulation of visual cortical processing
October 8, 2014
Siyu Zhang
HWNI, UC Berkeley

Top-down modulation of sensory processing is a prominent cognitive process that allows the animal to select sensory information most relevant to the current task, but…

Propagation and variability of evoked responses: the role of correlated inputs and oscillations.
September 30, 2014
Alejandro Bujan
Bernstein Center, University of Freiburg

The cortex processes information through distributed networks of functionally heterogeneous brain areas. This processing requires spiking activity to be transferred between…

Her Majesty the Data
September 24, 2014
Alyosha Efros
UC Berkeley

While the term “Big Data” has only recently come into vogue, it can be argued that many of the practical advances in fields like computer vision have been driven…

Self-organized criticality, criticality without self-organization and self-organization without criticality
September 19, 2014
Anna Levina
Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany

Self-organized criticality is considered a common phenomenon in nature and became a fascinating research subject for neuroscience, when critical avalanches were predicted…

Computational diversity and the mesoscale organization of the neocortex
September 19, 2014
Gary Marcus
New York University

The human neocortex participates in a wide range of tasks, yet superficially appears to adhere to a relatively uniform six-layered architecture throughout its extent. For that…

Formal Theory of Fun and Creativity and Intrinsic Motivation
August 15, 2014
Jürgen Schmidhuber
IDSIA, Switzerland

I will talk about the active, unsupervised, curious, creative systems we have developed since 1990. They not only learn to solve externally posed tasks, but also…

Information driven self-organization of robotic behavior
August 6, 2014
Georg Martius
Max Planck Institute, Leipzig

Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in…

Unsolved Mysteries of Hippocampal Dynamics
July 23, 2014
Gautam Agarwal
Redwood Center/UC Berkeley

Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons…

Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices
July 2, 2014
Kelly Clancy
UC Berkeley

I’ll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine…

The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system
June 25, 2014
Peter Loxley
UNM, Los Alamos Campus; and Center for Nonlinear Studies, LANL

The two-dimensional Gabor function is adapted to natural image statistics by learning the joint distribution of the Gabor function parameters. This joint distribution…

‘Tuning the brain’ – Treating mental states through microtubule vibrations
June 11, 2014
Stuart Hameroff
University of Arizona, Tucson

Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to…

Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis
April 30, 2014
Masataka Watanabe
University of Tokyo

I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry’s observation that split-brain patients possess two independent…

Dynamics of visual perception and collective neural activity
April 22, 2014
Jochen Braun
Otto-von-Guericke University, Magdeburg, Germany

Visual perception has all the hallmarks of an ongoing, cooperative-competitive process: probabilistic outcome, self-organization, order-disorder transitions…

Learning Structure in Time Series for Neuroscience and Beyond
April 16, 2014
David Pfau
Columbia University

Data from neuroscience is fiendishly complex. Neurons exhibit correlations on very long timescales and across large populations, and the activity of individual neurons…

Role of Dendritic Computation in the Direction-Selective Circuit of Retina
March 26, 2014
Robert G. Smith
University of Pennsylvania

The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst…

State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity
March 19, 2014
Dean Buonomano
UCLA

The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano…

Neural Representations of Language Meaning
March 14, 2014
Tom Mitchell
Computer Science Dept., Carnegie Mellon University

How does the human brain use neural activity to create and represent meanings of words, sentences and stories? One way to study this question is to give people text to…

Circuit defects in the neocortex of Fmr1 knockout mice
March 12, 2014
Carlos Portera-Cailliau
UCLA

Subtle alterations in how cortical network dynamics are modulated by different behavioral states could disrupt normal brain function and underlie symptoms of neuropsychiatric…

Self-organization in balanced state networks by STDP and homeostatic plasticity
March 6, 2014
Felix Effenberger
Max Planck Institute for Mathematics in the Sciences

Structural inhomogeneities have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However…

Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies
February 25, 2014
Alexander Terekhov
Laboratory of Psychology of Perception, Paris Descartes University (Paris 5)

The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is how to extract…

Continuous vector representations for machine translation
February 12, 2014
Ilya Sutskever
Google

Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries…

Current technological applications in sensory learning
January 29, 2014
David Klein
Audience, Inc.

Our computing devices are outfitted with more sensors and dedicated sensor processors than ever before, spanning audio, image, motion, touch, climate, and more. Meanwhile, the technology industry…

Orthogonal Sparse Coding and Sensing
January 22, 2014
Thomas Martinetz
University of Luebeck

Sparse Coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal…

Characterizing Short-Term Memory for Musical Timbre
December 11, 2013
Kai Siedenburg & Stephen McAdams
Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), Schulich School of Music, McGill University, Montreal, QC, Canada

Short-term memory is a cognitive faculty central to the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its “sisterhood” with speech…

What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach
December 4, 2013
Zhenwen Dai
FIAS, Goethe University Frankfurt, Germany

We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of…

Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions
November 6, 2013
Garrett Kenyon
Los Alamos National Laboratory

Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers…

Beyond Deep Learning: Scalable Methods and Models for Learning
October 31, 2013
Oriol Vinyals
Computer Science, UC Berkeley

In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from…

Large N in neural data – expecting the unexpected
October 30, 2013
Ilya Nemenman
Departments of Physics and Biology, Emory University

Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression…

Understanding the building blocks of neural computation: Insights from connectomics and theory
October 29, 2013
Mitya Chklovskii
Janelia Farm, HHMI

Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection…

Multiscale modeling in neuroscience: first steps towards multiscale co-simulation tool development
October 9, 2013
Ekaterina Brocke
KTH University, Stockholm, Sweden

Multiscale modeling/simulation attracts an increasing number of neuroscientists to study how different levels of organization (networks of neurons, cellular/subcellular levels) interact…

VSA: Vector Symbolic Architectures for Cognitive Computing in Neural Networks
June 14, 2013
Ross Gayler
PhD in psychology from University of Queensland

This talk is about computing with discrete compositional data structures in analog computers. A core issue for both computer science and cognitive neuroscience is the degree…

Transforming sensory inputs into motor acts: Insights from looking, reaching and speaking
May 22, 2013
Bijan Pesaran
Center for Neural Science, NYU

Sensory motor integration involves transforming a pattern of sensory input to a motor output and is a core neural operation carried out by all nervous systems. Analyses of…

Joint Redwood/CNEP seminar: Internal model estimation for closed-loop brain-computer interfaces
May 15, 2013
Byron Yu
Carnegie Mellon University

The motor system successfully plans and executes sophisticated movements despite sensory feedback delays and effector dynamics that change over time. Behavioral studies suggest…

Scalable Neuroscience and the Brain Activity Mapping Project
April 19, 2013
Tom Dean
Google

Since the beginning of the year, the European Union and United States have separately announced major initiatives in brain science. The latter is called the Brain Activity…

Statistical Models of Binaural Sounds
April 17, 2013
Wiktor Młynarski
Max Planck Institute for Mathematics in the Sciences

The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to…

Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis
April 9, 2013
Mounya Elhilali
Johns Hopkins

The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge…

How Visual Evolution Determines What We See
March 27, 2013
Dale Purves
Duke University

Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual…

Bifurcations and phase-locking dynamics in the auditory system
February 20, 2013
Dolores Bozovic
UCLA

The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is…

Empirical statistical analysis of phases in Gabor filtered natural images
February 7, 2013
Valero Laparra
University of Valencia

The talk will show the results of an empirical statistical analysis of images processed by complex Gabor-like filters. The analysis intends to be a compilation of statistical…

Relating neurons to perception in stereo vision
January 30, 2013
Jenny Read
Newcastle University

Stereo “3D” vision refers to the depth perception we have by virtue of viewing the world through two slightly offset eyes. This ability is receiving attention at the moment…

Hierarchical Curiosity Loops – Model, Behavior and Robotics
January 29, 2013
Goren Gordon
Weizmann Institute

Autonomously learning about one’s own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal’s pup and…

Neural substrates of decision-making in the rat
January 23, 2013
Carlos Brody
Princeton University

Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural…

An exactly solvable model of Maxwell’s demon
January 14, 2013
Dibyendu Mandal
Physics Dept., University of Maryland

The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing…

Quantum cognition and brain microtubules
January 7, 2013
Stuart Hameroff
University of Arizona

Cognitive decision processes are generally modeled as classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict…

Probabilistic models of object shapes
December 14, 2012
Ali Eslami
University of Edinburgh

We address the question of how to build a ‘strong’ probabilistic model of object shapes (binary silhouettes). We define a strong model as one which meets two requirements…

Joint Training Deep Boltzmann Machines for Classification
December 12, 2012
Ian Goodfellow
University of Montreal

The traditional deep Boltzmann machine training algorithm requires a greedy layerwise pretraining phase. Existing techniques for avoiding greedy pretraining do not perform…

Learning visual motion in recurrent neural networks
December 10, 2012
Marius Pachitariu
Gatsby Institute, University College London

We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on…

Efficient coding of images and sounds: Hierarchical processing and biological constraints
November 30, 2012
Yan Karklin
Center for Neural Science, NYU

Efficient coding provides a powerful principle for explaining early sensory processing. Among the successful applications of this theory are models that provide functional…

Identifying human inductive biases
November 7, 2012
Tom Griffiths
UC Berkeley

People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language…

Is General Anesthesia a failure of cortical information integration?
October 23, 2012
Jaimie Sleigh
University of Auckland

General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic…

Balanced spiking networks can implement dynamical systems with predictive coding
October 8, 2012
Sophie Deneve
Laboratoire de Neurosciences cognitives, ENS-INSERM

Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes—all-or-none…

The Development of White Matter and Reading Skills
September 26, 2012
Jason Yeatman
Department of Psychology, Stanford University

The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of…

Hierarchical models of natural images
July 30, 2012
Lucas Theis
Werner Reichardt Centre for Integrative Neuroscience, Tübingen

Probabilistic models of natural images have been used to solve a variety of computer vision tasks as well as a means to better understand the computations…

The Neural Binding Problem(s)
June 27, 2012
Jerome Feldman
ICSI/Computer Science, UC Berkeley

The famous Neural Binding Problem (NBP) comprises at least four distinct problems with different computational and neural requirements. This talk will review the…

Understanding neural coding
June 7, 2012
Mitya Chklovskii
HHMI, Janelia Farm

The efficient coding hypothesis states that the front end properties of sensory systems, such as visual, can be understood from the statistics…

Integration and gating of sensory information is achieved by a single cortical circuit with orthogonal mixed representations
March 14, 2012
David Sussillo & Valerio Mante
Stanford University

Computations in neural circuits are inherently flexible, allowing humans and animals to respond to sensory stimuli with actions that are appropriate in a given context. Fundamental to…

From Learning Models of Natural Image Patches to Whole Image Restoration
March 1, 2012
Daniel Zoran
The Hebrew University of Jerusalem

Learning good image priors is of utmost importance for the study of vision, computer vision and image processing applications. Learning priors…

Sparse high order interaction networks underlie learnable neural population codes
February 22, 2012
Elad Schneidman
Department of Neurobiology, Weizmann Institute of Science

What Hemodynamics can and cannot tell us about neural activity in the brain
January 24, 2012
Aniruddha Das
Columbia University

Brain imaging is based on measuring not neural activity but rather, brain hemodynamics – local changes in blood volume, blood flow and oxygenation. These hemodynamic…

Subjective Contours
January 11, 2012
Ken Nakayama
Department of Psychology, Harvard University

The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works…

How the unstable eye sees a stable and moving world
December 14, 2011
Austin Roorda
School of Optometry, UC Berkeley

How is it that the eye can have an exquisite sense of motion even while the retinal image of the stable world during fixation is in constant motion? Several hypotheses have arisen: The “efference-copy” hypothesis holds that efferent signals derived…

Reconstructing visual experiences from brain activity evoked by natural movies
October 26, 2011
Shinji Nishimoto
UC Berkeley

Quantitative modeling of human brain activity can provide crucial insights about cortical representations and can form the basis for brain decoding devices. Recent functional magnetic resonance imaging (fMRI) studies have…

Design of a Semantic Type System to Facilitate Data Sharing and Analysis Tool Reuse
October 19, 2011
Graham Cummins
Washington State University

Data sharing between labs, and indeed between disciplines, reduces duplication of effort, facilitates new discoveries, and leads to the development of more flexible, reliable, and reproducible analysis techniques. Initially, a data…

Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium
October 5, 2011
Susanne Still
University of Hawaii, Manoa

Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years…

On the unity of perception: How does the brain integrate activity evoked at different cortical loci?
September 27, 2011
Moshe Gur
Dept. of Biomedical Engineering, Technion, Israel Institute of Technology

Any physical device we know of, including computers, must, when comparing A to B, send that information to a third point C. I have done experiments in three modalities, somatosensory, auditory, and visual, where…

Directing Cortical Plasticity to Understand and Treat Neurological Disease
September 21, 2011
Michael P. Kilgard
Professor of Neuroscience, University of Texas at Dallas

Even simple experiences activate large numbers of neurons in the central nervous system. It is not at all clear how many neurons are needed to generate a sensory percept or how activity among these neurons leads…

Multi-aperture computational imaging systems for depth, scientific analysis and human perception
September 15, 2011
Kathrin Berkner
Ricoh Innovations, Inc.

The design of complete imaging systems using a joint design framework has led to significant achievements in terms of reduced system size and cost or enriched imaging features. At Ricoh Innovations we are designing…

Explaining tuning curves by estimating interactions between neurons
May 26, 2011
Ian Stevenson
Northwestern University

One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin…