Seminars & Colloquia

Live and recorded talks from the researchers shaping this domain.

Seminar

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
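
To make the idea of a neuron acting as a data-driven feedback controller concrete, here is a minimal sketch (not the speaker's model): a single "neuron" identifies an unknown scalar linear environment from its own input/output stream via recursive least squares and uses the estimate to drive its input toward a set point. All dynamics and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.95, 0.5            # unknown environment dynamics
theta = np.zeros(2)                   # neuron's running estimate of (a, b)
P = np.eye(2) * 100.0                 # recursive-least-squares covariance
x, u, setpoint = 1.0, 0.0, 0.0

for t in range(300):
    # Environment: the neuron's future input depends on its current output u.
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()

    # Data-driven system identification (recursive least squares).
    phi = np.array([x, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta += K * (x_next - phi @ theta)
    P -= np.outer(K, phi @ P)

    # Feedback control: pick the output that drives the predicted next input
    # toward the set point, plus a little probing noise for identifiability.
    a_hat, b_hat = theta
    u = (setpoint - a_hat * x_next) / b_hat if abs(b_hat) > 1e-3 else 0.0
    u += 0.05 * rng.standard_normal()
    x = x_next

print(f"final input x = {x:.3f} (set point {setpoint})")
```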

Speaker

Dmitri 'Mitya' Chklovskii • Flatiron Institute, Center for Computational Neuroscience

Scheduled for

Feb 4, 2024, 3:00 PM

Timezone

GMT

Seminar

Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome

Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.
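
For readers unfamiliar with network control theory, the following toy sketch shows how a transition energy of the kind described above is typically computed for a linear model x' = Ax + Bu via the controllability Gramian; the small random "connectome", the control set and the activation patterns are assumptions made purely for illustration, not the study's data or pipeline.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, T = 8, 1.0
A = rng.standard_normal((n, n)) * 0.2                          # toy "connectome"
A -= np.eye(n) * (np.abs(np.linalg.eigvals(A)).max() + 1.0)    # keep dynamics stable
B = np.eye(n)                                                  # control input at every node

def control_energy(A, B, x0, xT, T, steps=200):
    """Minimum energy E = d^T W_T^{-1} d with d = xT - e^{AT} x0 and
    W_T = integral_0^T e^{At} B B^T e^{A^T t} dt (simple quadrature)."""
    W = np.zeros_like(A)
    for t in np.linspace(0.0, T, steps):
        M = expm(A * t) @ B
        W += M @ M.T * (T / steps)
    d = xT - expm(A * T) @ x0
    return d @ np.linalg.solve(W, d)

bottom = np.zeros(n); bottom[:4] = 1.0     # illustrative "sensory" activation pattern
top = np.zeros(n); top[4:] = 1.0           # illustrative "association" pattern
print("bottom-up energy:", control_energy(A, B, bottom, top, T))
print("top-down energy: ", control_energy(A, B, top, bottom, T))
```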

Speaker

Linden Parkes • Rutgers Brain Health Institute

Scheduled for

Mar 22, 2023, 7:00 AM

Timezone

GMT+9

Seminar

Trading Off Performance and Energy in Spiking Networks

Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how spiking neural networks (SNNs) follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different trade-off points. We next formalize this trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinate their spiking to minimize both the activity and coding losses, and as a result can dynamically adjust their coding precision and energy use. The network uses several known activity control mechanisms for this (threshold adaptation and feedback inhibition) and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off that is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
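
A rough sketch of the kind of joint objective described above (my own toy, not the speaker's derivation): an auto-encoder with spike counts r and decoder D, with coding loss ||x - Dr||^2 plus an activity penalty mu * sum(r). A neuron adds a spike only if that lowers the joint loss, so raising mu trades coding precision for fewer spikes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, dim = 50, 3
D = rng.standard_normal((dim, n_neurons))
D /= np.linalg.norm(D, axis=0)             # unit-norm decoding weights
x = rng.standard_normal(dim)

def encode(x, D, mu, max_spikes=500):
    """Greedy encoding: fire the spike that most reduces the joint loss."""
    r = np.zeros(D.shape[1])
    for _ in range(max_spikes):
        err = x - D @ r
        # Change in joint loss if neuron i adds one spike:
        # ||err - D_i||^2 - ||err||^2 + mu = -2 D_i.err + ||D_i||^2 + mu
        delta = -2 * D.T @ err + np.sum(D**2, axis=0) + mu
        i = np.argmin(delta)
        if delta[i] >= 0:                  # no spike lowers the loss any further
            break
        r[i] += 1
    return r

for mu in (0.0, 0.1, 1.0):
    r = encode(x, D, mu)
    print(f"mu={mu:4.1f}  spikes={int(r.sum()):3d}  "
          f"coding loss={np.sum((x - D @ r)**2):.4f}")
```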

Speaker

Sander Keemink • Donders Institute for Brain, Cognition and Behaviour

Scheduled for

May 31, 2022, 3:30 PM

Timezone

GMT+1

Seminar

Learning in/about/from the basal ganglia

The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed and computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory, related to the effects of extracellular ion dynamics, that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward mapping approach and the impact of extracellular ion dynamics will nonetheless be of interest!

Speaker

Jonathan Rubin • University of Pittsburgh

Scheduled for

May 24, 2022, 11:00 AM

Timezone

EDT

Seminar

Sensing in Insect Wings

Ali Weber (University of Washington, USA) uses the hawkmoth as a model system to investigate how information from a small number of mechanoreceptors on the wings is used in flight control. She employs a combination of experimental and computational techniques to study how these sensors respond during flight and how one might optimally array a set of these sensors to best provide feedback.
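
As a generic illustration of the sparse sensor-placement problem mentioned above (not the speaker's method), the sketch below picks a few sensor locations by QR column pivoting on a synthetic basis of "wing strain" modes and checks how well those few readings reconstruct the full field.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
n_locations, n_modes, n_sensors = 200, 5, 5
x = np.linspace(0, 1, n_locations)
# Synthetic low-rank basis standing in for wing-strain patterns.
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(n_modes)], axis=1)

# QR with column pivoting on the mode basis ranks candidate sensor locations.
_, _, pivots = qr(modes.T, pivoting=True)
sensors = np.sort(pivots[:n_sensors])

# Reconstruct a random strain field from only those few sensor readings.
field = modes @ rng.standard_normal(n_modes)
coeffs, *_ = np.linalg.lstsq(modes[sensors], field[sensors], rcond=None)
rel_error = np.linalg.norm(modes @ coeffs - field) / np.linalg.norm(field)
print("sensor locations:", sensors)
print("relative reconstruction error:", round(rel_error, 6))
```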

Speaker

Ali Weber • University of Washington, USA

Scheduled for

Apr 18, 2022, 3:00 PM

Timezone

GMT

Seminar

Taming chaos in neural circuits

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size. This analysis shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
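
For orientation, the following sketch estimates the Lyapunov spectrum of a generic random firing-rate network x' = -x + g*J*tanh(x) by propagating orthonormal perturbations with the Jacobian and re-orthonormalizing with QR; the network, gain and step sizes are illustrative assumptions, not the models analyzed in the talk.

```python
import numpy as np

rng = np.random.default_rng(4)
N, g, dt, k, steps = 100, 1.5, 0.05, 10, 2000
J = rng.standard_normal((N, N)) / np.sqrt(N)          # random coupling matrix
x = rng.standard_normal(N)
Q = np.linalg.qr(rng.standard_normal((N, k)))[0]      # k orthonormal perturbations
lyap_sums = np.zeros(k)

for _ in range(steps):
    phi = 1.0 - np.tanh(x) ** 2
    # Jacobian of the Euler map x -> x + dt*(-x + g*J*tanh(x)).
    Dmap = np.eye(N) + dt * (-np.eye(N) + g * J * phi)
    x = x + dt * (-x + g * (J @ np.tanh(x)))
    Q, R = np.linalg.qr(Dmap @ Q)
    lyap_sums += np.log(np.abs(np.diag(R)))

spectrum = lyap_sums / (steps * dt)
print("leading Lyapunov exponents:", np.round(spectrum[:5], 3))
```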

Speaker

Rainer Engelken • Columbia University

Scheduled for

Feb 22, 2022, 11:00 AM

Timezone

EDT

Seminar

Free will beyond spontaneous volition: Conscious control processes of inhibition and attention in self-control and free will

Polaris Koi (Philosophy) and Jake Gavenas (Neuroscience) begin the seminar by arguing that agentive control is the key requirement for free will, drawing on folk-philosophy findings to support this claim (Gavenas et al., in prep). They explore how two executive control processes that functionally involve consciousness—inhibition and top-down control of attention—connect self-control and free will.

Speaker

Timothy Bayne/Polaris Koi/Jake Gavenas • Monash University/University of Turku/Chapman University

Scheduled for

Feb 14, 2022, 11:00 AM

Timezone

PDT

Seminar

Towards model-based control of active matter: active nematics and oscillator networks

The richness of active matter's spatiotemporal patterns continues to capture our imagination. Shaping these emergent dynamics into pre-determined forms of our choosing is a grand challenge in the field. To complicate matters, multiple dynamical attractors can coexist in such systems, leading to initial condition-dependent dynamics. Consequently, non-trivial spatiotemporal inputs are generally needed to access these states. Optimal control theory provides a general framework for identifying such inputs and represents a promising computational tool for guiding experiments and interacting with various systems in soft active matter and biology. As an exemplar, I first consider an extensile active nematic fluid confined to a disk. In the absence of control, the system produces two topological defects that perpetually circulate. Optimal control identifies a time-varying active stress field that restructures the director field, flipping the system to its other attractor that rotates in the opposite direction. As a second, analogous case, I examine a small network of coupled Belousov-Zhabotinsky chemical oscillators that possesses two dominant attractors, two wave states of opposing chirality. Optimal control similarly achieves the task of attractor switching. I conclude with a few forward-looking remarks on how the same model-based control approach might come to bear on problems in biology.
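
The attractor-switching task can be illustrated on a much smaller system than an active nematic. The sketch below (a bistable toy, entirely my own assumptions) uses direct single shooting and a generic optimizer to find a time-varying input that carries x' = x - x^3 + u from the attractor at x = -1 to the one at x = +1 while penalizing control effort.

```python
import numpy as np
from scipy.optimize import minimize

T, n_steps = 5.0, 50
dt = T / n_steps

def simulate(u):
    """Euler integration of x' = x - x^3 + u(t), starting in the x = -1 attractor."""
    x = -1.0
    for ut in u:
        x += dt * (x - x**3 + ut)
    return x

def objective(u):
    terminal = (simulate(u) - 1.0) ** 2           # end up in the other basin
    effort = dt * np.sum(np.asarray(u) ** 2)      # control-energy penalty
    return terminal + 0.1 * effort

res = minimize(objective, x0=np.zeros(n_steps), method="L-BFGS-B")
u_opt = res.x
print("final state:", round(simulate(u_opt), 3),
      " control energy:", round(dt * np.sum(u_opt**2), 3))
```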

Speaker

Michael Norton • Rochester Institute of Technology

Scheduled for

Jan 30, 2022, 9:00 AM

Timezone

PDT

Seminar

The organization of neural representations for control

Cognitive control allows us to think and behave flexibly based on our context and goals. Most theories of cognitive control propose a control representation that enables the same input to produce different outputs contingent on contextual factors. In this talk, I will focus on an important property of the control representation's neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. This tradeoff has important implications for cognitive control. In this talk, I will present initial evidence from fMRI and EEG showing that task representations in the human brain leverage both ends of this tradeoff during flexible behavior.
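
One common way to quantify the representational dimensionality discussed here is the participation ratio of a condition-by-neuron response matrix; the sketch below (illustrative only, not necessarily the measure used in the talk) contrasts a low-dimensional, generalizable code with a high-dimensional, separable one.

```python
import numpy as np

def participation_ratio(responses):
    """responses: (n_conditions, n_neurons) matrix of mean activity.
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(responses, rowvar=False))
    lam = np.clip(lam, 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(5)
n_cond, n_neurons = 32, 100
low_d = rng.standard_normal((n_cond, 2)) @ rng.standard_normal((2, n_neurons))
high_d = rng.standard_normal((n_cond, n_neurons))
print("low-dimensional code PR: ", round(participation_ratio(low_d), 2))
print("high-dimensional code PR:", round(participation_ratio(high_d), 2))
```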

Speaker

David Badre • Brown University

Scheduled for

Dec 9, 2021, 12:00 PM

Timezone

EDT

Seminar

NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning and vice versa?

In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. However, humans are often believed to learn motor control through directed error-based learning. Within this learning setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal. This is unlike reward-based learning, in which errors are assumed to be unsigned, encoding relative successes and failures. Here, we try to understand how these two approaches, reward- and error-based learning, relate to ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted model-based deterministic policy gradient (MB-DPG). This RL algorithm draws inspiration from error-based learning to qualitatively reproduce human reaching performance under mirror-reversal. Next, we show that MB-DPG outperforms the other canonical (deep) RL algorithms on a single- and a multi-target ballistic reaching task based on a biomechanical model of the human arm. Finally, we propose that MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
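
To illustrate the contrast between reward-based and error-based updates (a hypothetical toy, not the paper's MB-DPG implementation), the sketch below trains two scalar "reaching" policies on a mirror-reversed arm: one with a REINFORCE-style unsigned-reward update, and one that learns a forward model of the arm and follows the signed error gradient through it, which is the ingredient a model-based deterministic policy gradient exploits.

```python
import numpy as np

rng = np.random.default_rng(6)
target, arm_gain = 1.0, -1.0         # arm_gain = -1 stands in for a mirror reversal
lr, sigma, n_trials = 0.05, 0.2, 300

w_reward, w_error, model_gain = 0.0, 0.0, 0.0
for _ in range(n_trials):
    # Reward-based (REINFORCE-style): perturb the command, reinforce what worked.
    u = w_reward + sigma * rng.standard_normal()
    reward = -(arm_gain * u - target) ** 2
    baseline = -(arm_gain * w_reward - target) ** 2
    w_reward += lr * (reward - baseline) * (u - w_reward) / sigma**2

    # Error-based: fit a forward model of the arm, then follow the signed
    # error gradient through that model (the model-based ingredient).
    u = w_error + sigma * rng.standard_normal()
    y = arm_gain * u
    model_gain += 0.5 * (y - model_gain * u) * u       # LMS fit of hand = gain * u
    w_error -= lr * 2.0 * (y - target) * model_gain    # chain rule through the model

print("reward-based final error:", abs(arm_gain * w_reward - target))
print("error-based  final error:", abs(arm_gain * w_error - target))
```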

Speaker

Michele Garibbo • University of Bristol

Scheduled for

Nov 30, 2021, 3:30 PM

Timezone

EDT

Seminar

NMC4 Keynote: A network perspective on cognitive effort

Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.

Speaker

Dani Bassett • University of Pennsylvania

Scheduled for

Nov 30, 2021, 9:00 AM

Timezone

EDT

Seminar

Designing temporal networks that synchronize under resource constraints

Being fundamentally a non-equilibrium process, synchronization comes with unavoidable energy costs and has to be maintained under the constraint of limited resources. Such resource constraints are often reflected as a finite coupling budget available in a network to facilitate interaction and communication. In this talk, I will show that introducing temporal variation in the network structure can lead to efficient synchronization even when stable synchrony is impossible in any static network under the given budget. Our strategy is based on an open-loop control scheme and alludes to a fundamental advantage of temporal networks. Whether this advantage of temporality can be utilized in the brain is an interesting open question.
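
The comparison can be set up in a few lines (setup only; this does not reproduce the talk's analysis): Kuramoto oscillators sharing a fixed total coupling budget, connected either through one static ring or through a graph that is redrawn every few steps from random matchings with the same budget, with the phase-coherence order parameter reported for both.

```python
import numpy as np

rng = np.random.default_rng(7)
N, K_total, dt, steps = 20, 10.0, 0.01, 20000
omega = rng.standard_normal(N) * 0.5                  # heterogeneous frequencies

def ring(N):
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
    return A

def random_matching(N):
    perm = rng.permutation(N)
    A = np.zeros((N, N))
    for i in range(0, N, 2):
        a, b = perm[i], perm[i + 1]
        A[a, b] = A[b, a] = 1.0
    return A

def simulate(switch_every=None):
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    A = ring(N)
    coherence = []
    for t in range(steps):
        if switch_every and t % switch_every == 0:
            A = random_matching(N)
        K = K_total * A / A.sum()                     # same total coupling budget
        coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + coupling)
        if t > steps * 3 // 4:                        # average r over the last quarter
            coherence.append(np.abs(np.exp(1j * theta).mean()))
    return float(np.mean(coherence))

print("static ring,      r =", round(simulate(None), 3))
print("switching graphs, r =", round(simulate(switch_every=50), 3))
```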

Speaker

Yuanzhao Zhang • Santa Fe Institute

Scheduled for

Oct 21, 2021, 12:00 PM

Timezone

GMT+11

Seminar

Credit Assignment in Neural Networks through Deep Feedback Control

The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
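
A heavily simplified rendering of the control-as-credit-assignment idea (a single linear layer, not the paper's DFC algorithm): an integral feedback controller nudges the output toward the target, and the steady-state control signal, which equals the output error, drives a purely local weight update; in this stripped-down case the rule reduces exactly to the delta rule.

```python
import numpy as np

rng = np.random.default_rng(8)
n_in, n_out, lr = 5, 2, 0.1
W = rng.standard_normal((n_out, n_in)) * 0.1
W_true = rng.standard_normal((n_out, n_in))          # defines the toy regression task

for epoch in range(100):
    loss = 0.0
    for _ in range(20):
        x = rng.standard_normal(n_in)
        target = W_true @ x

        # Feedback controller: integrate u until the controlled output matches
        # the target; at steady state u = target - W x (the output error).
        u = np.zeros(n_out)
        for _ in range(200):
            y_controlled = W @ x + u
            u += 0.1 * (target - y_controlled)

        # Local plasticity: (controlled minus feedforward pre-activation) x presynaptic.
        W += lr * np.outer(u, x)
        loss += np.sum((W @ x - target) ** 2)
    if epoch % 25 == 0:
        print(f"epoch {epoch:3d}  loss {loss / 20:.4f}")
```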

Speaker

Alexander Meulemans • Institute of Neuroinformatics, University of Zürich and ETH Zürich

Scheduled for

Sep 29, 2021, 2:00 PM

Timezone

GMT

Seminar

Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective

Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, these theories miss a rationale for why such constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adapt higher constraints on the allocation of control as the demand for flexibility increases, but that they deviate from optimal constraints. In continuing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences in the balance between cognitive stability and flexibility.
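
The trade-off can be caricatured with a one-dimensional double-well system (my own illustrative model, not the authors'): the two task states are wells whose depth plays the role of the control constraint, and deeper wells resist a distracting input (stability) but take longer to leave when the cued task changes (flexibility).

```python
def switch_time(depth, drive=2.0, dt=0.01, t_max=100.0):
    """Time to travel from the x = -1 well past x = 0 once a drive toward +1 is on."""
    x, t = -1.0, 0.0
    while x < 0.0 and t < t_max:
        x += dt * (-depth * x * (x**2 - 1.0) + drive)
        t += dt
    return t

def distraction_drift(depth, distractor=0.3, dt=0.01, t_max=20.0):
    """Displacement from x = -1 produced by a constant distracting input."""
    x = -1.0
    for _ in range(int(t_max / dt)):
        x += dt * (-depth * x * (x**2 - 1.0) + distractor)
    return abs(x + 1.0)

# Shallow wells (weak constraints) switch fast but can be captured by the
# distractor; deep wells (strong constraints) barely drift but switch slowly.
for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"depth {depth:3.1f}: switch time {switch_time(depth):5.2f}, "
          f"distraction drift {distraction_drift(depth):.3f}")
```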

Speaker

Naomi Leonard • Princeton University

Scheduled for

Mar 25, 2021, 12:00 PM

Timezone

EDT

Seminar

Stochastic control of passive colloidal objects by micro-swimmers

The way single colloidal objects behave in the presence of active forces arising from within the bulk of the system is crucial to many situations, notably biological and ecological ones (e.g. intra-cellular transport, predation), and to potential medical or environmental applications (e.g. targeted delivery of cargoes, depollution of waters and soils). In this talk I will present experimental findings that my collaborators and I have obtained over the past years on the dynamics of single Brownian colloids in suspensions of biological micro-swimmers, especially the green alga Chlamydomonas reinhardtii. Notably, I'll show that spatial heterogeneities and anisotropies in the statistics of the active particles can control the preferential localisation of their passive counterparts. The results will be rationalized using theoretical approaches from hydrodynamics and stochastic processes.

Speaker

Raphael Jeanneret • University of Warwick

Scheduled for

Dec 1, 2020, 4:00 PM

Timezone

GMT

Seminar

On climate change, multi-agent systems and the behaviour of networked control

Multi-agent reinforcement learning (MARL) has recently shown great promise as an approach to networked system control. Arguably, one of the most difficult and important tasks for which large-scale networked system control is applicable is common-pool resource (CPR) management. Crucial CPRs include arable land, fresh water, wetlands, wildlife, fish stocks, forests and the atmosphere, whose proper management bears on some of society's greatest challenges, such as food security, inequality and climate change. This talk will consist of three parts. In the first, we will briefly look at climate change and how it poses a significant threat to life on our planet. In the second, we will consider the potential of multi-agent systems for climate change mitigation and adaptation. Finally, in the third, we will discuss recent research from InstaDeep on better understanding the behaviour of networked MARL systems used for CPR management. More specifically, we will see how tools from empirical game-theoretic analysis may be harnessed to analyse differences between networked MARL systems. The results give new insights into the consequences associated with certain design choices and provide an additional dimension of comparison between systems beyond efficiency, robustness, scalability and mean control performance.

Speaker

Arnu Pretorius • InstaDeep

Scheduled for

Nov 17, 2020, 5:30 PM

Timezone

GMT+2

Seminar

Deep learning for model-based RL

Model-based approaches to control and decision making have long held the promise of being more powerful and data-efficient than model-free counterparts. However, success with model-based methods has been limited to cases where a perfect model can be queried. The game of Go was mastered by AlphaGo using a combination of neural networks and the Monte Carlo tree search (MCTS) planning algorithm, but planning required a perfect representation of the game rules. I will describe new algorithms that instead leverage deep neural networks to learn models of the environment, which are then used to plan and to update policy and value functions. These new algorithms offer hints about how brains might approach planning and acting in complex environments.
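
As a deliberately tiny illustration of the generic recipe (not AlphaGo or the speaker's algorithms), the sketch below learns a tabular model of an unknown 10-state chain from random experience and then plans with value iteration on that learned model rather than on the true environment.

```python
import numpy as np

rng = np.random.default_rng(9)
n_states, n_actions, goal = 10, 2, 9          # actions: 0 = left, 1 = right

def step(s, a):
    """True (hidden) environment: a chain with a reward at the right end."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == goal)

# 1) Learn a model from random interaction.
T_hat = np.zeros((n_states, n_actions), dtype=int)
R_hat = np.zeros((n_states, n_actions))
for _ in range(2000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s2, r = step(s, a)
    T_hat[s, a], R_hat[s, a] = s2, r           # deterministic world: last sample wins

# 2) Plan with the learned model (value iteration), never touching step() again.
gamma, V = 0.9, np.zeros(n_states)
for _ in range(100):
    Q = R_hat + gamma * V[T_hat]
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print("greedy policy (1 = move right):", policy)
```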

Speaker

Timothy Lillicrap • Google DeepMind, University College London

Scheduled for

Jun 11, 2020, 2:00 PM

Timezone

GMT