Seminars & Colloquia

Live and recorded talks from the researchers shaping this domain.

20 items
Seminar

FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images

Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.

Speaker

Raul Ismayilov • University of Twente

Scheduled for

Jul 1, 2025, 2:00 PM

Timezone

GMT+1

Seminar

Memory formation in a hippocampal microcircuit

The centre of memory is the medial temporal lobe (MTL), and especially the hippocampus. In our research, a flexible, brain-inspired computational microcircuit model of the CA1 region of the mammalian hippocampus was upgraded and used to examine how information retrieval is affected under different conditions. Six model variants were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of the feedforward excitation was the most effective way to recall memories: it allows the system to access stored memories more accurately.
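As a loose illustration of how recall under a feedforward cue drive can be probed computationally (a generic Hopfield-style toy of my own, not the CA1 microcircuit model from the talk), one can cue an attractor network with a partial pattern and add a feedforward drive of that cue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hopfield-style recurrent weights storing 20 random binary patterns.
N, P = 300, 20
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, ff_gain, steps=10):
    """Recurrent retrieval with an extra feedforward drive of the cue."""
    s = np.where(cue != 0, cue, 1)      # fill unknown bits arbitrarily
    for _ in range(steps):
        h = W @ s + ff_gain * cue       # recurrent + feedforward input
        s = np.where(h >= 0, 1, -1)
    return s

# Partial cue: 60% of the target pattern's bits known, the rest unknown (0).
target = patterns[0]
cue = target.copy()
cue[rng.choice(N, size=int(0.4 * N), replace=False)] = 0

acc_weak = np.mean(recall(cue, ff_gain=0.0) == target)
acc_strong = np.mean(recall(cue, ff_gain=1.0) == target)
```

Here the feedforward term only reinforces the known bits of the cue, so retrieval with the drive is at least as reliable as the purely recurrent case.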

Speaker

Nikolaos Andreakos • Visiting Scientist, School of Computer Science, University of Lincoln; Scientific Associate, National and Kapodistrian University of Athens

Scheduled for

Feb 6, 2025, 12:00 PM

Timezone

GMT+2

Seminar

Decision and Behavior

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
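Gershman's policy-compression idea can be sketched in a few lines of rate-distortion-style code (my illustration with a toy reward matrix, not the talk's model): the capacity-limited policy takes the form pi(a|s) proportional to p(a)·exp(beta·Q[s,a]), with the action marginal p(a) updated self-consistently.

```python
import numpy as np

def compressed_policy(Q, p_s, beta, iters=200):
    """Capacity-limited policy via Blahut-Arimoto-style iterations:
    pi(a|s) ∝ p(a) * exp(beta * Q[s, a]). Low beta -> heavy compression
    (a state-independent 'default' action); high beta -> near-greedy."""
    n_a = Q.shape[1]
    p_a = np.full(n_a, 1.0 / n_a)
    for _ in range(iters):
        logits = np.log(p_a) + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                      # update the action marginal
    return pi

def policy_complexity(pi, p_s):
    """Mutual information I(S;A) in bits: the policy's description cost."""
    p_a = p_s @ pi
    return float(np.sum(p_s[:, None] * pi * np.log2(pi / p_a)))

Q = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # each state has a distinct best action
p_s = np.array([0.5, 0.5])

pi_lo = compressed_policy(Q, p_s, beta=0.1)   # tight capacity limit
pi_hi = compressed_policy(Q, p_s, beta=10.0)  # loose capacity limit
```

Under a tight capacity limit the policy becomes nearly state-independent (pervasive perseveration); with ample capacity it becomes state-specific and near-greedy.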

Speaker

Sam Gershman, Jonathan Pillow, Kenji Doya • Harvard University; Princeton University; Okinawa Institute of Science and Technology

Scheduled for

Nov 28, 2024, 2:00 PM

Timezone

GMT+1

Seminar

A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time

Portable biosensors are becoming more popular every year. In this context, I propose NeuriGUI, a modular and cross-platform graphical interface that connects to these biosensors for real-time processing, exploration, and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.
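NeuriGUI itself is a graphical application, but the core of any such real-time pipeline is a stateful, chunk-by-chunk filter whose output matches offline processing. A minimal generic sketch (not NeuriGUI's actual code):

```python
import numpy as np
from scipy.signal import butter, lfilter

class StreamingBandpass:
    """Band-pass filter applied chunk by chunk, carrying its internal
    state across chunks so the streamed output matches offline filtering."""
    def __init__(self, low_hz, high_hz, fs, order=4):
        self.b, self.a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
        self.zi = np.zeros(max(len(self.a), len(self.b)) - 1)

    def process(self, chunk):
        out, self.zi = lfilter(self.b, self.a, chunk, zi=self.zi)
        return out

fs = 250                        # Hz, a typical consumer-EEG sampling rate
t = np.arange(4 * fs) / fs
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)  # alpha + mains

bp = StreamingBandpass(8.0, 12.0, fs)
# Feed the stream in 100 ms chunks, as a device driver would deliver it.
out = np.concatenate([bp.process(c) for c in np.split(raw, 40)])
```

Keeping the `zi` filter state is what makes chunked (real-time) filtering equivalent to filtering the whole recording at once.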

Speaker

David Baum • Research Engineer at InteraXon

Scheduled for

May 27, 2024, 12:00 PM

Timezone

GMT-3

Seminar

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.

Speaker

Kim Uittenhove & Olivier Mucchiut • AFC Lab / University of Lausanne

Scheduled for

Oct 15, 2023, 10:30 AM

Timezone

GMT+1

Seminar

Internet interventions targeting grief symptoms

Web-based self-help interventions for coping with prolonged grief have established their efficacy. However, few programs address recent losses and investigate the effect of self-tailoring of the content. In an international project, the text-based self-help program LIVIA was adapted and complemented with an Embodied Conversational Agent, an initial risk assessment and a monitoring tool. The new program SOLENA was evaluated in three trials in Switzerland, the Netherlands and Portugal. The aim of the trials was to evaluate the clinical efficacy for reducing grief, depression and loneliness and to examine client satisfaction and technology acceptance. The talk will present the SOLENA program and report results of the Portuguese and Dutch trials as well as preliminary results of the Swiss RCT. The ongoing Swiss trial compares a standardised to a self-tailored delivery format and analyses clinical outcomes, the helpfulness of specific content and the working alliance. Finally, lessons learned in the development and evaluation of a web-based self-help intervention for older adults will be discussed.

Speaker

Jeannette Brodbeck • Fachhochschule Nordwestschweiz / University of Bern

Scheduled for

Sep 24, 2023, 1:00 PM

Timezone

GMT+1

Seminar

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
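The "race" intuition can be seen in a scalar caricature (my sketch of the ungated deep linear case, far simpler than the talk's Gated Deep Linear Network framework): two weight pathways compete to explain the same target map, and gradient descent on the product parameterization lets the pathway with the larger initial weights win.

```python
import numpy as np

# Two parallel two-layer pathways race to explain the target map y = 1 * x.
# Gradient descent on the product parameterization favours the pathway with
# the larger initial weights: it ends up carrying almost all of the mapping.
w1 = np.array([0.2, 0.05])   # first-layer weights of pathways a and b
w2 = np.array([0.2, 0.05])   # second-layer weights of pathways a and b
target, lr = 1.0, 0.05
for _ in range(2000):
    err = np.sum(w1 * w2) - target
    w1, w2 = w1 - lr * err * w2, w2 - lr * err * w1

products = w1 * w2           # per-pathway contribution to the learned map
```

Both pathways grow multiplicatively at the same rate, so their initial 16:1 product ratio is preserved: the head-start pathway absorbs the task, an implicit bias that the exact reductions in the talk make precise.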

Speaker

Andrew Saxe • UCL

Scheduled for

Apr 13, 2023, 12:30 PM

Timezone

EDT

Seminar

Linking GWAS to pharmacological treatments for psychiatric disorders

Genome-wide association studies (GWAS) have identified multiple disease-associated genetic variations across different psychiatric disorders, raising the question of how these genetic variants relate to the corresponding pharmacological treatments. In this talk, I will outline our work investigating whether functional information from a range of open bioinformatics datasets, such as protein-protein interaction (PPI) networks, brain eQTL, and gene expression patterns across the brain, can uncover the relationship between GWAS-identified genetic variation and the genes targeted by current drugs for psychiatric disorders. Focusing on four psychiatric disorders (ADHD, bipolar disorder, schizophrenia, and major depressive disorder), we assess relationships between the gene targets of drug treatments and GWAS hits. We show that, while incorporating information derived from functional bioinformatics data, such as the PPI network and spatial gene expression, can reveal links for bipolar disorder, the overall correspondence between treatment targets and GWAS-implicated genes in psychiatric disorders rarely exceeds null expectations. This relatively low degree of correspondence across modalities suggests that the genetic mechanisms driving the risk for psychiatric disorders may be distinct from the pathophysiological mechanisms used for targeting symptom manifestations through pharmacological treatments, and that novel approaches for understanding and treating psychiatric disorders may be required.
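The kind of null-model benchmarking described here can be illustrated with a hypergeometric overlap test (an illustrative sketch with made-up gene sets, not the authors' pipeline): how surprising is the observed overlap between drug-target genes and GWAS-implicated genes if targets were drawn at random from the genome?

```python
from scipy.stats import hypergeom

def overlap_p(n_genome, set_a, set_b):
    """One-sided p-value for the overlap of two gene sets under a
    hypergeometric null (random draws from a genome of n_genome genes)."""
    k = len(set_a & set_b)                  # observed overlap
    # P(overlap >= k): population n_genome, len(set_a) 'successes',
    # len(set_b) draws.
    return hypergeom.sf(k - 1, n_genome, len(set_a), len(set_b))

# Hypothetical gene sets, labelled by integer IDs.
gwas_hits = set(range(250))           # 250 GWAS-implicated genes
drug_targets = set(range(240, 260))   # 20 drug-target genes; overlap = 10

p = overlap_p(20000, gwas_hits, drug_targets)
```

An overlap of 10 where fewer than one gene is expected by chance yields a vanishingly small p-value; the talk's point is that, for most disorders, real overlaps do not beat such nulls.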

Speaker

Aurina Arnatkeviciute • Monash University

Scheduled for

Aug 18, 2022, 1:00 PM

Timezone

GMT+11

Seminar

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

Speaker

Lenore and Manuel Blum • Carnegie Mellon University

Scheduled for

Aug 4, 2022, 10:00 AM

Timezone

GMT+9

Seminar

Optimal information loading into working memory in prefrontal cortex

Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading, and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.
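The core claim can be seen in a two-unit caricature (my toy, far simpler than the talk's analysis): in a non-normal circuit, a pulse along a transient "loading" direction, orthogonal to the persistent mode, can store more per unit input than a pulse along the persistent mode itself.

```python
import numpy as np

# A tiny non-normal memory circuit: unit 2 is a perfect integrator holding
# the memory; unit 1 is a transient loading channel feeding it with gain beta.
beta = 5.0
A = np.array([[0.0, 0.0],
              [beta, 1.0]])
persistent = np.array([0.0, 1.0])    # direction of the persistent (memory) mode

def stored_after(x0, steps=10):
    """Amount of activity left along the persistent mode after a pulse x0."""
    x = x0.copy()
    for _ in range(steps):
        x = A @ x
    return persistent @ x

aligned = stored_after(np.array([0.0, 1.0]))     # pulse along the memory mode
orthogonal = stored_after(np.array([1.0, 0.0]))  # pulse along the loading channel
```

A unit pulse on the loading channel ends up storing beta times more than the same pulse applied directly along the persistent mode, the non-normal amplification that makes orthogonal loading optimal.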

Speaker

Máté Lengyel • University of Cambridge, UK

Scheduled for

Jun 21, 2022, 11:00 AM

Timezone

EDT

Seminar

Molecular Logic of Synapse Organization and Plasticity

Connections between nerve cells called synapses are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant adjustment of the strength of synaptic communication by neuronal activity, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab research program comprises three major themes. One, to assess how biochemical switches control the activity of synapse organizing proteins, how these switches act through their binding partners and how these processes are regulated to correct impaired synaptic function in disease. Two, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Three, to address how synapses are formed in the developing brain and maintained in the mature brain and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and enabled us to identify targets for therapeutic intervention.

Speaker

Tabrez Siddiqui • University of Manitoba

Scheduled for

May 30, 2022, 4:00 PM

Timezone

EDT

Seminar

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter.
Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
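For readers unfamiliar with the baseline model, here is a minimal static-synapse tempotron decision rule (my sketch of the standard model that the talk generalizes; the dynamic and ordinal synapses discussed above go beyond this):

```python
import numpy as np

TAU_M, TAU_S = 15.0, 3.75    # membrane / synaptic time constants (ms)

def psp(t):
    """Tempotron PSP kernel (difference of exponentials), unit peak."""
    t = np.maximum(t, 0.0)                   # causal: no effect before the spike
    k = np.exp(-t / TAU_M) - np.exp(-t / TAU_S)
    t_pk = TAU_M * TAU_S / (TAU_M - TAU_S) * np.log(TAU_M / TAU_S)
    return k / (np.exp(-t_pk / TAU_M) - np.exp(-t_pk / TAU_S))

def fires(afferent_spike_times, w, theta=1.0):
    """Binary tempotron decision: does the summed voltage cross theta?"""
    t_grid = np.arange(0.0, 100.0, 0.1)      # ms
    v = np.zeros_like(t_grid)
    for i, times in enumerate(afferent_spike_times):
        for s in times:
            v += w[i] * psp(t_grid - s)      # static efficacy w[i] per spike
    return bool(v.max() >= theta)

w = np.array([0.7, 0.7])
coincident = fires([[10.0], [10.0]], w)      # synchronous input spikes
dispersed = fires([[10.0], [80.0]], w)       # same spikes, spread in time
```

With static synapses each spike contributes the same scalar efficacy; the talk's point is that letting each synapse learn its own short-term dynamics adds capacity beyond this scheme.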

Speaker

Robert Guetig • Charité – Universitätsmedizin Berlin & BIH

Scheduled for

Mar 8, 2022, 11:00 AM

Timezone

EDT

Seminar

Integrators in short- and long-term memory

The accumulation and storage of information in memory is a fundamental computation underlying animal behavior. In many brain regions and task paradigms, ranging from motor control to navigation to decision-making, such accumulation is accomplished through neural integrator circuits that enable external inputs to move a system’s population-wide patterns of neural activity along a continuous attractor. In the first portion of the talk, I will discuss our efforts to dissect the circuit mechanisms underlying a neural integrator from a rich array of anatomical, physiological, and perturbation experiments. In the second portion of the talk, I will show how the accumulation and storage of information in long-term memory may also be described by attractor dynamics, but now within the space of synaptic weights rather than neural activity. Altogether, this work suggests a conceptual unification of seemingly distinct short- and long-term memory processes.
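The defining property of a neural integrator, accumulating inputs and holding the result along a continuous attractor, can be sketched in one dimension (a generic rate-model toy, not the circuit dissected in the talk):

```python
import numpy as np

def integrate(pulses, leak=0.0, dt=0.01, T=5.0):
    """One-dimensional rate-model integrator. With leak == 0, recurrent
    feedback exactly balances intrinsic decay (a line attractor): input
    pulses are summed and held. With leak > 0 the memory drifts to baseline."""
    n = int(T / dt)
    r = np.zeros(n)
    for step in range(1, n):
        inp = sum(a for (t0, a) in pulses if abs(step * dt - t0) < dt / 2)
        r[step] = r[step - 1] - dt * leak * r[step - 1] + inp
    return r

pulses = [(1.0, 1.0), (2.0, 0.5), (3.0, -0.5)]   # (time in s, amplitude)
held = integrate(pulses, leak=0.0)
leaky = integrate(pulses, leak=1.0)
```

The leak-free trace ends exactly at the running sum of its inputs (1.0), while a mistuned (leaky) integrator forgets, which is why the tuning of recurrent feedback is the central experimental question for such circuits.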

Speaker

Mark Goldman • UC Davis

Scheduled for

Mar 1, 2022, 11:00 AM

Timezone

EDT

Seminar

How does the metabolically expensive mammalian brain adapt to food scarcity?

Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. I addressed this in the visual cortex of awake mice using whole-cell recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. I found that food restriction reduced synaptic ATP usage by 29% through a decrease in AMPA receptor conductance. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at similar rates as controls, but spent less ATP on underlying excitatory currents. This energy-saving strategy had a cost since it amplified the variability of visually-evoked subthreshold responses, leading to a 32% broadening in orientation tuning and impaired fine visual discrimination. This reduction in coding precision was associated with reduced levels of the fat mass-regulated hormone leptin and was restored by exogenous leptin supplementation. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.

Speaker

Zahid Padamsey • Rochefort lab, University of Edinburgh

Scheduled for

Feb 22, 2022, 5:35 PM

Timezone

GMT+1

Seminar

Dissecting the role of accumbal D1 and D2 medium spiny neurons in information encoding

Nearly all motivated behaviors require the ability to associate outcomes with specific actions and make adaptive decisions about future behavior. The nucleus accumbens (NAc) is integrally involved in these processes. The NAc contains a heterogeneous population primarily composed of D1 and D2 medium spiny projection neurons (MSNs) that are thought to have opposed roles in behavior, with D1 MSNs promoting reward and D2 MSNs promoting aversion. Here we examined what types of information are encoded by D1 and D2 MSNs using optogenetics, fiber photometry, and cellular-resolution calcium imaging. First, we showed that mice responded for optical self-stimulation of both cell types, suggesting that D2 MSN activation is not inherently aversive. Next, we recorded population and single-cell activity patterns of D1 and D2 MSNs during reinforcement as well as Pavlovian learning paradigms that allow dissociation of stimulus value, outcome, cue learning, and action. We demonstrated that D1 MSNs respond to the presence and intensity of unconditioned stimuli, regardless of value. Conversely, D2 MSNs responded to the prediction of these outcomes during specific cues. Overall, these results provide foundational evidence for the discrete aspects of information that are encoded within the NAc D1 and D2 MSN populations. These results will significantly enhance our understanding of the involvement of NAc MSNs in learning and memory as well as how these neurons contribute to the development and maintenance of substance use disorders.

Speaker

Munir Gunes Kutlu • Calipari Lab, Vanderbilt University

Scheduled for

Feb 8, 2022, 5:00 PM

Timezone

GMT+1

Seminar

Inferring informational structures in neural recordings of drosophila with epsilon-machines

Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in neuroscience. In part, this is due to the difficulty of finding the appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity, i.e., using the information contained in a stream of recorded signals, they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden to, or not taken into account in, the current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the "epsilon-machines" framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity in the future will behave, and this is summarised in a single number called "statistical complexity". If a lot of past neural activity is required to predict the future behaviour, can we then say that the brain was more "awake" at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred the epsilon-machines.
We found that the nature of the past and future correlations of neural activity in the brain drastically changes depending on whether the fly was awake or anaesthetised. Not only does our study find that wakeful and anaesthetised fly brains are distinguished by how statistically complex they are, but also that the amount of correlations in wakeful fly brains was much more sensitive to whether the neural recordings were read forwards vs. backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were more complex and more time-asymmetric than anaesthetised ones.
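The flavour of the construction can be conveyed with a crude toy reconstruction (my simplification; real epsilon-machine inference is far more careful): cluster length-k pasts by their next-symbol distribution into approximate causal states, then take the entropy of the state distribution as the statistical complexity.

```python
import numpy as np
from collections import defaultdict

def statistical_complexity(seq, k=2, tol=0.1):
    """Crude causal-state reconstruction for a binary sequence: merge
    length-k pasts whose next-symbol distributions agree within tol, then
    return the Shannon entropy (bits) of the merged-state distribution."""
    stats = defaultdict(lambda: [0, 0])   # past -> [count(next=0), count(next=1)]
    for i in range(len(seq) - k):
        stats[tuple(seq[i:i + k])][seq[i + k]] += 1
    states = []                           # [P(next=1), total count] per state
    for c0, c1 in stats.values():
        p1 = c1 / (c0 + c1)
        for st in states:
            if abs(st[0] - p1) < tol:     # same predictive distribution: merge
                st[1] += c0 + c1
                break
        else:
            states.append([p1, c0 + c1])
    total = sum(c for _, c in states)
    p = np.array([c / total for _, c in states])
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
coin = rng.integers(0, 2, 20000).tolist()   # memoryless: one causal state
alternating = [0, 1] * 10000                # period-2: two causal states
```

A memoryless coin collapses to a single causal state (zero complexity), while the alternating process needs one bit of state, the kind of structure the talk's measures quantify in fly recordings.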

Speaker

Roberto Muñoz • Monash University

Scheduled for

Dec 9, 2021, 12:00 PM

Timezone

GMT+11

Seminar

CaImAn: large-scale batch and online analysis of calcium imaging data

Advances in fluorescence microscopy enable monitoring larger brain areas in-vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
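CaImAn's own API is too rich for a short snippet, but the core idea of its rigid motion-correction step, estimating each frame's shift against a template by phase correlation, fits in a few lines (a generic sketch, not CaImAn code; CaImAn's NoRMCorre-based correction is piecewise-rigid and subpixel):

```python
import numpy as np

def rigid_shift_estimate(frame, template):
    """Estimate an integer rigid (dy, dx) shift of `frame` relative to
    `template` via phase correlation of their 2-D FFTs."""
    F = np.fft.fft2(frame) * np.conj(np.fft.fft2(template))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices around to negative shifts.
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
template = rng.random((64, 64))
shifted = np.roll(template, shift=(3, -5), axis=(0, 1))  # simulated motion
```

Correcting the frame is then just rolling it back by the estimated shift; the pipeline described in the abstract automates this (and much more) across whole sessions.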

Speaker

Andrea Giovannucci • University of North Carolina at Chapel Hill

Scheduled for

Dec 7, 2021, 12:00 PM

Timezone

GMT-3

Seminar

Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence

Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.
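The structure of the problem is easy to state for an ideal observer (my toy parameterization, not the study's task): rare evidence is individually diagnostic, while its absence is only weakly informative, which is what makes suboptimal strategies diverge in interesting ways.

```python
import numpy as np

def log_posterior_odds(clicks, n, p_a=0.1, p_b=0.01, prior_odds=1.0):
    """Ideal-observer log posterior odds for state A vs state B after
    observing `clicks` rare events in n Bernoulli samples."""
    ll_a = clicks * np.log(p_a) + (n - clicks) * np.log(1 - p_a)
    ll_b = clicks * np.log(p_b) + (n - clicks) * np.log(1 - p_b)
    return ll_a - ll_b + np.log(prior_odds)

one_click = log_posterior_odds(1, 20)   # a single rare event: favours A
no_click = log_posterior_odds(0, 20)    # only absences: weakly favours B
```

Each observed rare event shifts the log odds by log(p_a/p_b) = log 10, while each non-event shifts it by only log(0.99/0.9) in the other direction, the asymmetry that human strategies handle with varying bias and variance.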

Speaker

Tahra Eissa • University of Colorado Boulder

Scheduled for

Nov 30, 2021, 11:00 AM

Timezone

EDT

Seminar

GuPPy, a Python toolbox for the analysis of fiber photometry data

Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and compatibility with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a well-commented Jupyter notebook designed to operate across platforms. GuPPy presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
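A central step that photometry tools like GuPPy automate is ΔF/F normalization against an isosbestic control channel (a standard sketch of the technique on simulated data, not GuPPy's actual code):

```python
import numpy as np

def dff_from_isosbestic(signal, control):
    """Standard fiber-photometry normalization: least-squares fit of the
    isosbestic control channel to the signal channel, then dF/F relative
    to the fitted baseline."""
    a, b = np.polyfit(control, signal, 1)
    fitted = a * control + b
    return (signal - fitted) / fitted

# Simulated session: both channels share a slow bleaching artifact; only
# the signal channel carries a brief neural transient.
t = np.linspace(0, 10, 1000)
control = 2.0 + 0.3 * np.exp(-t / 5)
signal = 1.5 * control.copy()
signal[500:520] += 0.6                     # transient response
dff = dff_from_isosbestic(signal, control)
```

Because the bleaching trend is common to both channels, the fit removes it, leaving a flat baseline with the transient isolated.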

Speaker

Talia Lerner • Northwestern University

Scheduled for

Nov 23, 2021, 12:00 PM

Timezone

GMT-3

Seminar

Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features

It is an open question whether preferences for visual art can be lawfully predicted from the basic constituent elements of a visual image. Here, we developed and tested a computational framework to investigate how aesthetic values are formed. We show that it is possible to explain human preferences for a visual art piece based on a mixture of low- and high-level features of the image. Subjective value ratings could be predicted not only within but also across individuals, using a regression model with a common set of interpretable features. We also show that the features predicting aesthetic preference can emerge hierarchically within a deep convolutional neural network trained only for object recognition. Our findings suggest that human preferences for art can be explained at least in part as a systematic integration over the underlying visual features of an image.
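The shared-feature regression idea can be sketched on synthetic data (my illustration with hypothetical feature vectors, not the study's stimuli or fitted weights): ratings generated from a common linear combination of interpretable features are predictable for held-out images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each artwork is a vector of interpretable visual
# features (e.g. hue, contrast, concreteness); ratings are a noisy linear
# combination shared across observers.
n_images, n_features = 200, 5
X = rng.normal(size=(n_images, n_features))
true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.1])
ratings = X @ true_w + 0.1 * rng.normal(size=n_images)

# Fit the shared linear model on half the images, predict the other half.
w, *_ = np.linalg.lstsq(X[:100], ratings[:100], rcond=None)
pred = X[100:] @ w
r = np.corrcoef(pred, ratings[100:])[0, 1]
```

The interesting empirical content of the talk is that such a common, interpretable feature set generalizes across real individuals, and that similar features emerge in an object-recognition network.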

Speaker

John O'Doherty • California Institute of Technology

Scheduled for

Nov 11, 2021, 12:00 PM

Timezone

GMT+11