
Seminars & Colloquia

Live and recorded talks from the researchers shaping this domain.

Seminar

Introduction to protocols.io: Scientific collaboration through open protocols

Research articles and laboratory protocols often lack the detailed instructions needed to replicate experiments. protocols.io is an open-access platform where researchers collaboratively create dynamic, interactive, step-by-step protocols that can be executed on mobile devices or the web. Researchers can easily and efficiently share protocols with colleagues, collaborators, or the wider scientific community, or make them public. Real-time communication and interaction keep protocols up to date. Public protocols receive a DOI, enabling open communication between authors and researchers and fostering efficient experimentation and reproducibility.

Speaker

Lenny Teytelman • Founder & President of protocols.io

Scheduled for

Sep 24, 2025, 11:00 AM

Timezone

GMT-3

Seminar

OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis

In this talk, I will discuss the OpenNeuro FitLins GLM package and illustrate its analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, the tool automates what is otherwise a largely manual process, stitched together from in-house scripts, for data retrieval, validation, quality control, statistical modeling, and reporting that can require weeks of effort. The workflow abides by open-science practices, enhances reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Stats Models specifications (with FitLins) to perform common mass univariate general linear model (GLM) analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned descriptions, all fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours on high-performance computing systems and enabling researchers to conduct robust single-study, meta-, and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting, and reproducibility.
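The mass univariate GLM at the core of such pipelines reduces, at each voxel, to ordinary least squares. A minimal, dependency-free sketch of that idea — this is not the OpenNeuro FitLins GLM API, just the underlying statistical model on a toy design matrix:

```python
# Conceptual sketch of the per-voxel GLM: fit y = X @ beta by ordinary
# least squares via the normal equations (X^T X) beta = X^T y.

def ols_fit(X, y):
    """Solve the normal equations for a small design matrix X (list of rows)."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Gauss-Jordan elimination on the p x p system XtX @ beta = Xty.
    for col in range(p):
        pivot = XtX[col][col]
        XtX[col] = [v / pivot for v in XtX[col]]
        Xty[col] /= pivot
        for row in range(p):
            if row != col:
                factor = XtX[row][col]
                XtX[row] = [rv - factor * cv for rv, cv in zip(XtX[row], XtX[col])]
                Xty[row] -= factor * Xty[col]
    return Xty  # the estimated beta vector

# Toy design: intercept + one task regressor (boxcar), noiseless signal.
X = [[1, 0], [1, 1], [1, 0], [1, 1]]
beta_true = [2.0, 3.0]
y = [sum(x * b for x, b in zip(row, beta_true)) for row in X]
beta_hat = ols_fit(X, y)
```

In a real pipeline the design matrix encodes task events convolved with a hemodynamic response, and the fit is repeated independently at every voxel.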

Speaker

Michael Demidenko • Stanford University

Scheduled for

Jul 31, 2025, 10:00 AM

Timezone

EDT

Seminar

Understanding reward-guided learning using large-scale datasets

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
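The reward-prediction-error idea can be made concrete with a few lines of tabular temporal-difference learning. This is a generic textbook illustration, not the model fitted to the songbird recordings:

```python
# Tabular TD(0) on a trivial one-state task: the RPE (delta) is the
# difference between the reward received and the reward predicted, and
# the prediction is nudged toward the outcome on every trial.

def td_learn(rewards, alpha=0.5, episodes=100):
    """Learn the value of a single state that yields `rewards` each episode."""
    V = 0.0
    rpes = []
    for _ in range(episodes):
        for r in rewards:
            delta = r - V          # reward prediction error
            V += alpha * delta     # prediction moves toward the outcome
            rpes.append(delta)
    return V, rpes

V, rpes = td_learn([1.0], episodes=50)
# As learning converges, the value approaches the reward and RPEs shrink.
```

The dopamine hypothesis discussed in the talk is that phasic dopamine activity tracks a signal like `delta` above: large when outcomes beat expectations, near zero once learning has converged.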

Speaker

Kim Stachenfeld • DeepMind, Columbia U

Scheduled for

Jul 8, 2025, 2:00 PM

Timezone

GMT

Seminar

FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images

Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.

Speaker

Raul Ismayilov • University of Twente

Scheduled for

Jul 1, 2025, 2:00 PM

Timezone

GMT+1

Seminar

OpenSPM: A Modular Framework for Scanning Probe Microscopy

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
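The abstract mentions scriptable control and optimization of PID parameters through the platform's Python API. As a flavor of what such scripting involves, here is a generic discrete PID controller driving a toy plant; the class and the plant are illustrative sketches, not part of the OpenSPM API:

```python
# Generic discrete-time PID loop of the kind an SPM feedback system uses
# to hold the probe at a setpoint. Illustrative only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrating plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
z = 0.0
for _ in range(2000):
    z += pid.step(1.0, z) * 0.01
```

A scriptable API makes gains like `kp` and `ki` ordinary function arguments, which is what opens the door to automated tuning and ML-driven optimization of scan parameters.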

Speaker

Marcos Penedo Garcia • Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland

Scheduled for

Jun 23, 2025, 10:00 AM

Timezone

GMT-3

Seminar

How Generative AI is Revolutionizing the Software Developer Industry

Generative AI is fundamentally transforming the software development industry by improving software testing, bug detection, and bug fixing, and by boosting developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being used to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to significantly improve software quality and efficiency. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.

Speaker

Luca Di Grazia • Università della Svizzera Italiana

Scheduled for

Sep 30, 2024, 12:00 PM

Timezone

GMT+1

Seminar

A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time

Portable biosensors are becoming more popular every year. In this context, I present NeuriGUI, a modular, cross-platform graphical interface that connects to these biosensors for real-time processing, exploration, and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.

Speaker

David Baum • Research Engineer at InteraXon

Scheduled for

May 27, 2024, 12:00 PM

Timezone

GMT-3

Seminar

Generative models for video games (rescheduled)

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Speaker

Katja Hofmann • Microsoft Research

Scheduled for

May 21, 2024, 2:00 PM

Timezone

GMT

Seminar

Modelling the fruit fly brain and body

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.

Speaker

Srinivas Turaga • HHMI | Janelia

Scheduled for

May 14, 2024, 2:00 PM

Timezone

GMT

Seminar

Generative models for video games

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Speaker

Katja Hofmann • Microsoft Research

Scheduled for

Apr 30, 2024, 2:00 PM

Timezone

GMT

Seminar

Current and future trends in neuroimaging

With the advent of several different fMRI analysis tools and packages outside of the established ones (i.e., SPM, AFNI, and FSL), today's researcher may wonder what the best practices are for fMRI analysis. This talk will discuss some of the recent trends in neuroimaging, including design optimization and power analysis, standardized analysis pipelines such as fMRIPrep, and an overview of current recommendations for how to present neuroimaging results. Along the way we will discuss the balance between Type I and Type II errors with different correction mechanisms (e.g., Threshold-Free Cluster Enhancement and Equitable Thresholding and Clustering), as well as considerations for working with large open-access databases.
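The Type I/Type II balance mentioned above starts from a simple multiple-testing observation: with tens of thousands of voxel-wise tests, an uncorrected threshold guarantees false positives, while a family-wise correction buys control at the cost of power. A toy calculation, using Bonferroni only because it is the simplest correction (not as a recommendation over TFCE or ETAC):

```python
# Back-of-the-envelope multiple-comparisons arithmetic for a whole-brain
# voxel-wise analysis.

def expected_false_positives(n_tests, alpha):
    """Expected number of false positives if every null is true."""
    return n_tests * alpha

def bonferroni_alpha(n_tests, family_alpha=0.05):
    """Per-test threshold that bounds the family-wise error rate."""
    return family_alpha / n_tests

n_voxels = 100_000
fp_uncorrected = expected_false_positives(n_voxels, 0.05)  # thousands of voxels
alpha_corr = bonferroni_alpha(n_voxels)                    # a far stricter per-voxel threshold
```

Methods like TFCE and ETAC exist precisely because Bonferroni-style corrections are too conservative for spatially correlated neuroimaging data; the arithmetic above only shows why some correction is unavoidable.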

Speaker

Andy Jahn • fMRI Lab, University of Michigan

Scheduled for

Dec 6, 2023, 2:30 AM

Timezone

EDT

Seminar

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.

Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916
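The 4D windowing idea can be sketched independently of the model: self-attention is computed within local (x, y, z, t) windows rather than across the entire volume, which is what keeps memory tractable. A shapes-only sketch of the window partition — not SwiFT's implementation:

```python
# Partition a 4D fMRI volume (x, y, z, t) into non-overlapping local
# windows; attention would then run within each window independently.

def partition_windows(shape, window):
    """Return the origin of each non-overlapping 4D window tiling `shape`."""
    assert all(s % w == 0 for s, w in zip(shape, window)), "shape must tile evenly"
    x_s, y_s, z_s, t_s = shape
    x_w, y_w, z_w, t_w = window
    origins = []
    for x in range(0, x_s, x_w):
        for y in range(0, y_s, y_w):
            for z in range(0, z_s, z_w):
                for t in range(0, t_s, t_w):
                    origins.append((x, y, z, t))
    return origins

# A 16x16x16x8 volume tiled by 4x4x4x2 windows gives (16/4)^3 * (8/2) = 256 windows.
wins = partition_windows((16, 16, 16, 8), (4, 4, 4, 2))
```

Attention cost then scales with the window volume rather than the full scan, which is why windowed (Swin-style) attention is feasible on 4D fMRI where full attention is not.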

Speaker

Junbeom Kwon

Scheduled for

Nov 20, 2023, 8:30 AM

Timezone

EDT

Seminar

State-of-the-Art Spike Sorting with SpikeInterface

This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
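As a taste of what the first stage of such a pipeline does, here is a bare-bones threshold-crossing spike detector in pure Python. Real pipelines built with SpikeInterface add filtering, sorter integration, quality metrics, and curation on top of this; the function below is purely illustrative:

```python
# Detect putative spikes as downward threshold crossings on a filtered
# extracellular trace (amplitudes are negative-going by convention here).

def detect_peaks(trace, threshold):
    """Return sample indices where the trace first crosses below -threshold."""
    peaks = []
    below = False
    for i, v in enumerate(trace):
        if v < -threshold and not below:
            peaks.append(i)   # record only the first sample of each crossing
            below = True
        elif v >= -threshold:
            below = False     # re-arm once the trace recovers
    return peaks

trace = [0.1, -0.2, -5.0, -6.1, -0.3, 0.0, -7.2, -0.1]
spikes = detect_peaks(trace, threshold=4.0)
```

Downstream steps — waveform extraction, clustering into units, and quality metrics — are exactly what the hands-on portion of the webinar covers with the real framework.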

Speaker

Samuel Garcia and Alessio Buccino • CNRS, Lyon, France and Allen Institute for Neural Dynamics, Seattle, USA

Scheduled for

Nov 6, 2023, 3:00 PM

Timezone

GMT

Seminar

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it with an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on whether and how to integrate these models into this domain.

Speaker

Kim Uittenhove & Olivier Mucchiut • AFC Lab / University of Lausanne

Scheduled for

Oct 15, 2023, 10:30 AM

Timezone

GMT+1

Seminar

Internet interventions targeting grief symptoms

Web-based self-help interventions for coping with prolonged grief have established their efficacy. However, few programs address recent losses or investigate the effect of self-tailoring of the content. In an international project, the text-based self-help program LIVIA was adapted and complemented with an embodied conversational agent, an initial risk assessment, and a monitoring tool. The new program, SOLENA, was evaluated in three trials in Switzerland, the Netherlands, and Portugal. The aim of the trials was to evaluate the clinical efficacy for reducing grief, depression, and loneliness and to examine client satisfaction and technology acceptance. The talk will present the SOLENA program and report results of the Portuguese and Dutch trials as well as preliminary results of the Swiss RCT. The ongoing Swiss trial compares a standardised with a self-tailored delivery format and analyses clinical outcomes, the helpfulness of specific content, and the working alliance. Finally, lessons learned in the development and evaluation of a web-based self-help intervention for older adults will be discussed.

Speaker

Jeannette Brodbeck • Fachhochschule Nordwestschweiz / University of Bern

Scheduled for

Sep 24, 2023, 1:00 PM

Timezone

GMT+1

Seminar

NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping

We will discuss this paper on Neuroquery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.

Speaker

Andy Jahn • fMRI Lab, University of Michigan

Scheduled for

Aug 31, 2023, 9:00 AM

Timezone

EDT

Seminar

Manipulating single-unit theta phase-locking with PhaSER: An open-source tool for real-time phase estimation and manipulation

Zoe has developed PhaSER, an open-source tool for real-time oscillatory phase estimation that applies optogenetic manipulations at precise phases of hippocampal theta during high-density electrophysiological recordings in head-fixed mice navigating a virtual environment. The precise timing of single-unit spiking relative to network-wide oscillations (i.e., phase locking) has long been thought to maintain excitatory-inhibitory homeostasis and coordinate cognitive processes, but due to intense experimental demands, the causal influence of this phenomenon has never been determined. Thus, we developed PhaSER (Phase-locked Stimulation to Endogenous Rhythms), a tool which allows the user to explore the temporal relationship between single-unit spiking and ongoing oscillatory activity.
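Real-time phase estimation can be illustrated with a lock-in style estimator that assumes a known theta frequency: correlate a recent window of signal against quadrature references aligned to the window's last sample. PhaSER's actual algorithm is not shown here; the function below is a purely illustrative sketch:

```python
import math

# Estimate the instantaneous phase of a narrowband oscillation at the most
# recent sample, given its (assumed known) center frequency.

def estimate_phase(window, freq, fs):
    """Estimate the phase (radians) of `window` at its last sample."""
    n = len(window)
    # Correlate with reference cosine/sine whose phase is zero at the window end.
    i_sum = sum(v * math.cos(2 * math.pi * freq * (k - (n - 1)) / fs)
                for k, v in enumerate(window))
    q_sum = sum(v * math.sin(2 * math.pi * freq * (k - (n - 1)) / fs)
                for k, v in enumerate(window))
    return math.atan2(-q_sum, i_sum)

fs, freq = 1000.0, 8.0        # 8 Hz "theta" sampled at 1 kHz
true_phase = 1.0
sig = [math.cos(2 * math.pi * freq * k / fs + true_phase) for k in range(1000)]
est = estimate_phase(sig, freq, fs)
```

A closed-loop system would run an estimate like this on each new chunk of data and trigger stimulation when the estimated phase enters the target range.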

Speaker

Zoe Christenson-Wick • Mount Sinai School of Medicine, NY, USA

Scheduled for

May 8, 2023, 10:00 AM

Timezone

EDT

Seminar

AI for Multi-centre Epilepsy Lesion Detection on MRI

Epilepsy surgery is a safe but underutilised treatment for drug-resistant focal epilepsy. One challenge in the presurgical evaluation of patients with drug-resistant epilepsy are patients considered “MRI negative”, i.e. where a structural brain abnormality has not been identified on MRI. A major pathology in “MRI negative” patients is focal cortical dysplasia (FCD), where lesions are often small or subtle and easily missed by visual inspection. In recent years, there has been an explosion in artificial intelligence (AI) research in the field of healthcare. Automated FCD detection is an area where the application of AI may translate into significant improvements in the presurgical evaluation of patients with focal epilepsy. I will provide an overview of our automated FCD detection work, the Multicentre Epilepsy Lesion Detection (MELD) project and how AI algorithms are beginning to be integrated into epilepsy presurgical planning at Great Ormond Street Hospital and elsewhere around the world. Finally, I will discuss the challenges and future work required to bring AI to the forefront of care for patients with epilepsy.

Speaker

Sophie Adler

Scheduled for

Feb 28, 2023, 6:00 PM

Timezone

GMT+1

Seminar

A Better Method to Quantify Perceptual Thresholds: Parameter-Free, Model-Free, Adaptive Procedures

The ‘quantification’ of perception is arguably both one of the most important and one of the most difficult aspects of perception research. This is particularly true in visual perception, where the evaluation of the perceptual threshold is a pillar of the experimental process. Choosing the correct adaptive psychometric procedure, as well as selecting the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's parameters are self-tuned and do not need to be chosen manually using heuristics (e.g. the step size in the Staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being among the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve many psychophysics experiments.
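For contrast with ZOOM's parameter-free design, here is the classic fixed-step staircase it improves on; the hand-picked step size below is exactly the kind of manual parameter that ZOOM removes. An illustrative sketch with a deterministic observer:

```python
# 1-up/1-down staircase: raise the stimulus level after a miss, lower it
# after a detection. The level ends up oscillating around the threshold,
# with accuracy limited by the hand-chosen step size.

def staircase(true_threshold, start, step, trials):
    """Run a fixed-step staircase against a deterministic observer."""
    level = start
    for _ in range(trials):
        detected = level > true_threshold   # idealized observer
        level += -step if detected else step
    return level

final = staircase(true_threshold=0.5, start=1.0, step=0.05, trials=200)
# `final` lands within one step of the true threshold of 0.5.
```

The procedure's accuracy and convergence speed both hinge on `step`, chosen by heuristic before the experiment; an ill-chosen value either slows convergence or limits precision, which is the failure mode the talk's parameter-free approach targets.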

Speaker

Julien Audiffren • University of Fribourg

Scheduled for

Feb 28, 2023, 4:00 PM

Timezone

GMT+1

Seminar

Exploring the Potential of High-Density Data for Neuropsychological Testing with Coregraph

Coregraph is a tool under development that allows us to collect high-density data patterns during the administration of classic neuropsychological tests such as the Trail Making Test and the Clock Drawing Test. These tests are widely used to evaluate cognitive function and screen for neurodegenerative disorders, but traditional methods of data collection yield only sparse information, such as test completion time or error types. By contrast, the high-density data collected with Coregraph may contribute to a better understanding of the cognitive processes involved in executing these tests. In addition, Coregraph may revolutionize the field of cognitive evaluation by aiding in the prediction of cognitive deficits and the identification of early signs of neurodegenerative disorders such as Alzheimer's dementia. By analyzing high-density graphomotor data through techniques like manual feature engineering and machine learning, we can uncover patterns and relationships that would otherwise remain hidden with traditional methods of data analysis. We are currently in the process of determining the most effective methods of feature extraction and feature analysis to develop Coregraph to its full potential.

Speaker

Kim Uittenhove • University of Lausanne

Scheduled for

Feb 7, 2023, 2:00 PM

Timezone

GMT+1