Seminars & Colloquia

Live and recorded talks from the researchers shaping this domain.

Seminar
GMT-3

Introduction to protocols.io: Scientific collaboration through open protocols

Research articles and laboratory protocols often lack the detailed instructions needed to replicate experiments. protocols.io is an open-access platform where researchers collaboratively create dynamic, interactive, step-by-step protocols that can be run on mobile devices or the web. Researchers can easily share protocols with colleagues, collaborators, and the wider scientific community, or make them public. Real-time communication and interaction keep protocols up to date. Public protocols receive a DOI, enabling open communication between authors and researchers and fostering efficient experimentation and reproducibility.

Speaker

Lenny Teytelman • Founder & President of protocols.io

Scheduled for

Sep 24, 2025, 11:00 AM

Timezone

GMT-3

Seminar
GMT-3

Scaling Up Bioimaging with Microfluidic Chips

Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.

Speaker

Tobias Wenzel • Institute for Biological and Medical Engineering (IIBM), Pontificia Universidad Católica de Chile.

Scheduled for

Sep 4, 2025, 12:00 PM

Timezone

GMT-3

Seminar
GMT-3

The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments

Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of these instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. They can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions 300 mm × 300 mm × 450 mm. We propose to enable accessible SIM imaging by utilising its compact, lightweight and robust design to transport it where it is needed, and image in “unconventional” environments where factors such as temperature and biosafety considerations currently limit imaging experiments.

Speaker

Rebecca McClelland • PhD student at the University of Cambridge, United Kingdom.

Scheduled for

Aug 25, 2025, 12:00 PM

Timezone

GMT-3

Seminar
GMT-3

OpenSPM: A Modular Framework for Scanning Probe Microscopy

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
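The abstract highlights a Python API that makes feedback parameters such as PID gains fully scriptable. As a generic illustration of the discrete PID update such a scan controller applies (this is not OpenSPM's actual API; all names and gains here are invented), a minimal sketch:

```python
class PID:
    """Discrete PID controller, as used to hold a probe-sample signal at a setpoint."""

    def __init__(self, kp, ki, kd, setpoint, dt=1e-4):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        # error between desired and measured signal
        error = self.setpoint - measurement
        # accumulate the integral term and estimate the derivative
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # controller output: weighted sum of the three terms
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a scriptable platform, an optimization routine could instantiate this with candidate gains, run a short scan, and score the tracking error, which is the kind of automated tuning the abstract alludes to.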

Speaker

Marcos Penedo Garcia • Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland

Scheduled for

Jun 23, 2025, 10:00 AM

Timezone

GMT-3

Seminar
GMT-3

Open Hardware Microfluidics

What’s the point of having scientific and technological innovations when only a few can benefit from them? How can we make science more inclusive? Those questions are always in the back of my mind when we perform research in our laboratory, and we have a strong focus on the scientific accessibility of our developed methods from microfabrication to sensor development.

Speaker

Vittorio Saggiomo • Associate Professor, Laboratory of BioNanoTechnology, Wageningen University, The Netherlands

Scheduled for

Jun 5, 2025, 10:00 AM

Timezone

GMT-3

Seminar
GMT-3

A Focus on 3D Printed Lenses: Rapid prototyping, low-cost microscopy and enhanced imaging for the life sciences

High-quality glass lenses are commonplace in the design of optical instrumentation used across the biosciences. However, research-grade glass lenses are often costly, delicate and, depending on the prescription, can involve intricate and lengthy manufacturing - even more so in bioimaging applications. This seminar will outline 3D printing as a viable low-cost alternative for the manufacture of high-performance optical elements, where I will also discuss the creation of the world’s first fully 3D printed microscope and other implementations of 3D printed lenses. Our 3D printed lenses were generated using consumer-grade 3D printers and offer a 225× materials cost saving compared to glass optics. Moreover, they can be produced in any lab or home environment and offer great potential for education and outreach. Following performance validation, our 3D printed optics were implemented in the production of a fully 3D printed microscope and demonstrated in histological imaging applications. We also applied low-cost fabrication methods to exotic lens geometries to enhance resolution and contrast across spatial scales and reveal new biological structures. Across these applications, our findings showed that 3D printed lenses are a viable substitute for commercial glass lenses, with the advantage of being relatively low-cost, accessible, and suitable for use in optical instruments. Combining 3D printed lenses with open-source 3D printed microscope chassis designs opens the door for low-cost applications for rapid prototyping, low-resource field diagnostics, and the creation of cheap educational tools.

Speaker

Liam Rooney • University of Glasgow

Scheduled for

May 21, 2025, 10:00 AM

Timezone

GMT-3

Seminar
GMT-3

Magnetic Resonance and Remote Detection: No Need to Be So Close

Nuclear magnetic resonance is based on the phenomenon of nuclear magnetism and is the one that has found the most applications in the study of human disease. Usually, the MR signal is transmitted and received close to the object being imaged. An alternative is to transmit and receive the same signal remotely using waveguides. This approach has the advantages that it can be applied at high magnetic fields, energy absorption is lower, larger regions of interest can be covered, and it is more comfortable for the patient. On the other hand, it suffers from low image quality in some cases. In this talk we will discuss our experience with this approach, using an open waveguide and metamaterials in both clinical and preclinical MRI systems.

Speaker

Alfredo Rodriguez • Universidad Autónoma Metropolitana Iztapalapa

Scheduled for

Mar 26, 2025, 10:00 AM

Timezone

GMT-3

Seminar
EDT

Towards open meta-research in neuroimaging

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help automate meta-research and in turn make it easier to reproduce and update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
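As a toy illustration of the kind of automated information extraction the litmining tools perform (this is not pubextract's implementation; the pattern and function name are invented for the example), a single regex pass can pull reported sample sizes such as "n = 24" out of article text:

```python
import re

# Matches sample-size statements like "n = 24" or "N=12" and captures the count.
SAMPLE_RE = re.compile(r"\b[nN]\s*=\s*(\d+)")

def extract_sample_sizes(text):
    """Return every reported sample size found in the text, as integers."""
    return [int(m) for m in SAMPLE_RE.findall(text)]
```

Real extraction pipelines are far more careful (deduplication, subgroup handling, context checks), but the sketch shows why automating this step makes a meta-research project cheap to re-run and keep "living".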

Speaker

Kendra Oudyk • ORIGAMI - Neural data science - https://neurodatascience.github.io/

Scheduled for

Dec 8, 2024, 11:00 AM

Timezone

EDT

Seminar
GMT-3

Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller

Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost and limited modularity of commercial Raman systems often constrain exploratory research and hinder broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside an open-source acquisition software, ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.

Speaker

Kevin Uning • London Centre for Nanotechnology (LCN), University College London (UCL)

Scheduled for

Nov 28, 2024, 12:00 PM

Timezone

GMT-3

Seminar
GMT-3

Sometimes more is not better: The case of medical imaging

In diagnostic medical imaging, technical development has often concentrated on improving image quality in terms of spatial and/or temporal resolution, which has frequently driven up the cost of these services considerably. However, better spatial and/or temporal resolution in medical images does not necessarily translate into better or earlier diagnoses, and in some cases new diagnostic capabilities have not demonstrated any impact on reducing the mortality associated with the targeted pathologies. In this presentation we will discuss why the impact of new health technologies should be measured in terms of clinical outcomes for the patient or the affected population rather than by parameters associated with image "quality".

Speaker

Marcelo Andia • Associate Professor

Scheduled for

Nov 19, 2024, 1:00 PM

Timezone

GMT-3

Seminar
GMT

Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms

Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10μm to 2mm across a 325 square centimeter area for extended durations—ranging from hours to days—at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
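The core tracking idea can be sketched as a proportional control loop: detect the organism's centroid in each camera frame and nudge the XY stage so the organism stays centred in the field of view. This is an illustrative sketch, not the Trackoscope codebase; the gain, coordinates, and the 1:1 pixel-to-stage mapping are assumptions.

```python
def stage_step(centroid, frame_center, gain=0.5):
    """Proportional stage move (dx, dy) that re-centres the detected organism."""
    ex = centroid[0] - frame_center[0]  # pixel error in x
    ey = centroid[1] - frame_center[1]  # pixel error in y
    return gain * ex, gain * ey

def track_demo(organism=(400.0, 300.0), frame_center=(320.0, 240.0), steps=20):
    """Closed-loop demo: a stationary organism seen off-centre gets centred.

    The camera sees the organism at (organism - stage offset); each step the
    stage moves a fraction of the remaining error, so the error decays
    geometrically toward zero.
    """
    sx = sy = 0.0
    for _ in range(steps):
        centroid = (organism[0] - sx, organism[1] - sy)
        dx, dy = stage_step(centroid, frame_center)
        sx += dx
        sy += dy
    return sx, sy  # converges to the offset that centres the organism
```

A real tracker adds centroid detection from images, velocity feed-forward for fast swimmers, and limits on stage acceleration, but the feedback structure is the same.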

Speaker

Priya Soneji • Georgia Institute of Technology

Scheduled for

Oct 7, 2024, 5:00 PM

Timezone

GMT

Seminar
GMT-3

Optogenetic control of Nodal signaling patterns

Embryos issue instructions to their cells in the form of patterns of signaling activity. Within these patterns, the distribution of signaling in time and space directs the fate of embryonic cells. Tools to perturb developmental signaling with high resolution in space and time can help reveal how these patterns are decoded to make appropriate fate decisions. In this talk, I will present new optogenetic reagents and an experimental pipeline for creating designer Nodal signaling patterns in live zebrafish embryos. Our improved optoNodal reagents eliminate dark activity and improve response kinetics, without sacrificing dynamic range. We adapted an ultra-widefield microscopy platform for parallel light patterning in up to 36 embryos and demonstrated precise spatial control over Nodal signaling activity and downstream gene expression. Using this system, we demonstrate that patterned Nodal activation can initiate specification and internalization movements of endodermal precursors. Further, we used patterned illumination to generate synthetic signaling patterns in Nodal signaling mutants, rescuing several characteristic developmental defects. This study establishes an experimental toolkit for systematic exploration of Nodal signaling patterns in live embryos.

Speaker

Nathan Lord • Assistant Professor, Department of Computational and Systems Biology

Scheduled for

Sep 19, 2024, 12:00 PM

Timezone

GMT-3

Seminar
GMT-3

A Breakdown of the Global Open Science Hardware (GOSH) Movement

This seminar, hosted by the LIBRE hub project, will provide an in-depth introduction to the Global Open Science Hardware (GOSH) movement. Since its inception, GOSH has been instrumental in advancing open-source hardware within scientific research, fostering a diverse and active community. The seminar will cover the history of GOSH, its current initiatives, and future opportunities, with a particular focus on the contributions and activities of the Latin American branch. This session aims to inform researchers, educators, and policy-makers about the significance and impact of GOSH in promoting accessibility and collaboration in science instrumentation.

Speaker

Tobias Wenzel, PhD • Assistant Professor, Pontificia Universidad Católica de Chile; Brianna Johns, BSc • Community Coordinator, GOSH; Pablo Cremades, PhD • Coordinator of the Mendoza node of the reGOSH network

Scheduled for

Jul 16, 2024, 10:00 AM

Timezone

GMT-3

Seminar
GMT-3

Open source FPGA tools for building research devices

Edmund will present why to use FPGAs when building scientific instruments, when and why to use open source FPGA tools, the history of their development, their development status, currently supported FPGA families and functions, current developments in design languages and tools, the community, freely available design blocks, and possible future developments.

Speaker

Edmund Humenberger • CEO @ Symbiotic EDA

Scheduled for

Jun 24, 2024, 10:00 AM

Timezone

GMT-3

Seminar
GMT-3

Toward globally accessible neuroimaging: Building the OSI2ONE MRI Scanner in Paraguay

The Open Source Imaging Initiative has recently released a fully open source low field MRI scanner called the OSI2ONE. We are currently building this system at the Universidad Paraguayo Alemana in Asuncion, Paraguay for a neuroimaging project at a clinic in Bolivia. I will discuss the process of construction, important considerations before you build, and future work planned with this device.

Speaker

Joshua Harper • Professor of Engineering

Scheduled for

Jun 17, 2024, 1:30 PM

Timezone

GMT-3

Seminar
EDT

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Speaker

Nicholas Blauch

Scheduled for

Jun 6, 2024, 2:00 PM

Timezone

EDT

Seminar
GMT-3

A modular, free and open source graphical interface for visualizing and processing electrophysiological signals in real-time

Portable biosensors are becoming more popular every year. In this context, I propose NeuriGUI, a modular and cross-platform graphical interface that connects to these biosensors for real-time processing, exploration, and storage of electrophysiological signals. NeuriGUI acts as a common entry point in brain-computer interfaces, making it possible to plug in downstream third-party applications for real-time analysis of the incoming signal. NeuriGUI is 100% free and open source.
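NeuriGUI's entry-point role (samples arrive from a biosensor in small chunks, pass through processing stages, and are handed to downstream consumers) can be sketched as a minimal streaming stage. This is an illustrative sketch of the chunk-based pipeline pattern, not NeuriGUI's actual code; the class name and filter choice are invented.

```python
class HighpassStage:
    """Streaming DC-blocker: removes slow electrode drift from incoming chunks.

    One-pole high-pass filter y[n] = alpha * (y[n-1] + x[n] - x[n-1]),
    keeping its state between chunks so it can run on a live stream.
    """

    def __init__(self, alpha=0.99):
        self.alpha = alpha
        self.prev_x = 0.0
        self.prev_y = 0.0

    def process(self, chunk):
        out = []
        for x in chunk:
            y = self.alpha * (self.prev_y + x - self.prev_x)
            self.prev_x, self.prev_y = x, y
            out.append(y)
        return out
```

Because the stage keeps its own state, chunks of any size can be fed in as they arrive, which is what lets downstream consumers (plots, decoders, recorders) be plugged in independently.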

Speaker

David Baum • Research Engineer at InteraXon

Scheduled for

May 27, 2024, 12:00 PM

Timezone

GMT-3

Seminar
EDT

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Speaker

Mehdi Azabou

Scheduled for

Feb 21, 2024, 11:00 AM

Timezone

EDT

Seminar
EDT

Trends in NeuroAI - Brain-optimized inference improves reconstructions of fMRI brain activity

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
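The iterative loop described in the abstract (propose candidates around a seed, keep the one whose encoded activity best matches the measurement, shrink the proposal width, stop at a width criterion) can be shown in miniature. Everything below is a stand-in: a real system samples from a diffusion model and scores candidates with an fMRI encoding model, whereas here "images" are scalars, the candidate grid is a deterministic substitute for sampling, and `encode` is a toy linear map.

```python
def refine(seed, target_activity, encode, width=1.0, shrink=0.7,
           n_candidates=33, min_width=0.05):
    """Iteratively refine a seed by matching encoded candidates to measured activity."""
    current = seed
    while width > min_width:
        # evenly spaced candidates around the current estimate: a deterministic
        # stand-in for sampling a small library from a diffusion model
        library = [current + width * (2 * i / (n_candidates - 1) - 1)
                   for i in range(n_candidates)]
        # keep the candidate whose predicted brain activity best matches the data
        current = min(library, key=lambda img: abs(encode(img) - target_activity))
        # reduce stochasticity (proposal width) each iteration
        width *= shrink
    return current
```

With a toy encoder `encode(img) = 2 * img` and a measurement of 6.0 (so the "true image" is 3.0), the loop walks a seed of 0.0 to within the final grid spacing of 3.0, illustrating how selection pressure plus a shrinking proposal distribution converges on a brain-consistent reconstruction.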

Speaker

Reese Kneeland

Scheduled for

Jan 4, 2024, 11:00 AM

Timezone

EDT

Seminar
EDT

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
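The image-retrieval metric reported in the abstract reduces to a nearest-neighbour check: the decoder predicts an embedding from MEG activity, and the trial counts as a hit if the true image's embedding is its closest match in the gallery. A minimal sketch with toy 2-D vectors standing in for DINOv2-style features:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def top1_retrieval_accuracy(predicted, gallery):
    """Fraction of trials where predicted[i] is nearest to gallery[i]."""
    hits = 0
    for i, p in enumerate(predicted):
        best = max(range(len(gallery)), key=lambda j: cosine(p, gallery[j]))
        hits += (best == i)
    return hits / len(predicted)
```

The "7X improvement in image retrieval" claim is a statement about exactly this kind of score: how much more often the decoded embedding lands on the correct gallery image than a linear baseline does.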

Speaker

Paul Scotti

Scheduled for

Dec 6, 2023, 11:00 AM

Timezone

EDT