Events
Live and recorded talks from the researchers shaping this domain.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze typing, robotic tele-operation and human-robot interaction.
Speaker
Prof. Bert Shi • Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Scheduled for
Jun 11, 2025, 2:00 PM
Timezone
GMT+1
Structural & Functional Neuroplasticity in Children with Hemiplegia
About 30% of children with cerebral palsy have congenital hemiplegia resulting from periventricular white matter injury, which impairs the use of one hand and disrupts bimanual coordination. Congenital hemiplegia has a profound effect on each child's life and is thus of great importance to public health. Changes in brain organization (neuroplasticity) often occur following periventricular white matter injury. These changes vary widely depending on the timing, location, and extent of the injury, as well as the functional system involved. Currently, we have limited knowledge of neuroplasticity in children with congenital hemiplegia. As a result, we provide rehabilitation treatment to these children almost blindly, based exclusively on behavioral data. In this talk, I will present my team's recent research on understanding neuroplasticity in children with congenital hemiplegia using a multimodal approach that combines data from structural and functional neuroimaging methods. I will further present preliminary data on improvements in upper-extremity motor and sensory function resulting from rehabilitation with a robotic system that involves the child's active participation in a video-game setup. Our research is essential for the development of novel or improved neurological rehabilitation strategies for children with congenital hemiplegia.
Speaker
Christos Papadelis • University of Texas at Arlington
Scheduled for
Feb 20, 2025, 12:00 PM
Timezone
EST
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain, and the methods used offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations at the University of California San Diego Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons performing robot-assisted surgery. Lastly, findings from clinical trials testing personalized interventional treatments for mood disorders will be discussed, in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test whether personalization improves treatment response and can serve as a predictive biomarker to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
Speaker
Greg Applebaum • Department of Psychiatry, University of California, San Diego
Scheduled for
May 15, 2024, 5:00 PM
Timezone
GMT+1
Why robots? A brief introduction to the use of robots in psychological research
Why should psychologists be interested in robots? This talk aims to illustrate how social robots – machines with human-like features and behaviors – can offer interesting insights into the human mind. I will first provide a brief overview of how robots have been used in psychology and cognitive science research, focusing on two approaches: Developmental Robotics and Human-Robot Interaction (HRI). We will then delve into recent work in HRI, including my own, in greater detail. We will also address the limitations of research thus far, such as the lack of properly controlled experiments, and discuss how the scientific community should evaluate the use of technology in educational and other social settings.
Speaker
Junko Kanero • Sabanci University
Scheduled for
Jun 4, 2023, 6:00 PM
Timezone
GMT+3
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has, and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they lack a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
Speaker
Shiwali Mohan • Palo Alto Research Center
Scheduled for
Mar 29, 2023, 8:00 PM
Timezone
CST
Children-Agent Interaction For Assessment and Rehabilitation: From Linguistic Skills To Mental Well-being
Socially Assistive Robots (SARs) have shown great potential to help children in therapeutic and healthcare contexts. SARs have been used for companionship, learning enhancement, social and communication skills rehabilitation for children with special needs (e.g., autism), and mood improvement. Robots can be used as novel tools to assess and rehabilitate children’s communication skills and mental well-being by providing affordable and accessible therapeutic and mental health services. In this talk, I will present the various studies I have conducted during my PhD and at the Cambridge Affective Intelligence and Robotics Lab to explore how robots can help assess and rehabilitate children’s communication skills and mental well-being. More specifically, I will provide both quantitative and qualitative results and findings from (i) an exploratory study with children with autism and global developmental disorders to investigate the use of intelligent personal assistants in therapy; (ii) an empirical study involving children with and without language disorders interacting with a physical robot, a virtual agent, and a human counterpart to assess their linguistic skills; (iii) an 8-week longitudinal study involving children with autism and language disorders who interacted either with a physical or a virtual robot to rehabilitate their linguistic skills; and (iv) an empirical study to aid the assessment of mental well-being in children. These findings can inform and help the child-robot interaction community design and develop new adaptive robots to help assess and rehabilitate linguistic skills and mental well-being in children.
Speaker
Micole Spitale • Department of Computer Science and Technology, University of Cambridge
Scheduled for
Feb 6, 2023, 4:00 PM
Timezone
GMT
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands-on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
Speaker
Adam Kampff • Voight Kampff, London, UK
Scheduled for
Dec 4, 2022, 2:00 PM
Timezone
GMT+1
Lifelong Learning AI via neuro-inspired solutions
AI embedded in real systems, such as satellites, robots and other autonomous devices, must make fast, safe decisions even when the environment changes or available power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and with considerable computational power. Once fielded, the AI is frozen – unable to use its experience to operate when the environment falls outside its training, or to improve its expertise. Worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn at runtime and to apply that learning in new, unanticipated situations. Until recently, this sort of computation has been found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to adaptive AI learning, including work from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. A way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.
Speaker
Hava Siegelmann • University of Massachusetts Amherst
Scheduled for
Oct 26, 2022, 3:00 PM
Timezone
GMT
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf
Speaker
Yann LeCun • Meta FAIR
Scheduled for
Oct 9, 2022, 3:00 PM
Timezone
GMT+1
Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy
In recent years, there has been a proliferation of social robots and chatbots designed so that users form an emotional attachment to them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid 1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as various ethical issues raised by deploying such systems.
Speaker
Bipin Indurkhya • Cognitive Science Department, Jagiellonian University, Kraków
Scheduled for
May 18, 2022, 4:00 PM
Timezone
GMT
Interdisciplinary College
The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme "Flexibility" covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.
Speaker
Tarek Besold, Suzanne Dikker, Astrid Prinz, Fynn-Mathis Trautwein, Niklas Keller, Ida Momennejad, Georg von Wichert
Scheduled for
Mar 6, 2022, 4:00 PM
Timezone
GMT+1
Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, and share some leading-edge insights, including deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.
Speaker
James Patton • University of Illinois at Chicago, Shirley Ryan Ability Lab
Scheduled for
Feb 21, 2022, 4:00 PM
Timezone
GMT
Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?
While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. In this setting, learning through direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss my recent research on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.
Speaker
Dorothea Koert • Technical University of Darmstadt
Scheduled for
Dec 14, 2021, 2:00 PM
Timezone
GMT
NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling the trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Neurorobotics Platform, a simulation platform offering a virtual environment in which to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
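The spiking populations described above are built from simple neuron models. As an illustration only – not the authors' Neurorobotics Platform model, and with arbitrary parameter values – a single leaky integrate-and-fire neuron can be simulated in a few lines:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3):
    """Simulate one leaky integrate-and-fire neuron.

    input_current: array of drive values (scaled to volts here for
    simplicity). Returns the membrane trace and spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay towards rest, pushed by the input.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:        # threshold crossing -> spike and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold drive produces regular, rhythmic spiking --
# the kind of periodic output a central pattern generator exploits.
current = np.full(1000, 30e-3)
trace, spikes = simulate_lif(current)
```

With this drive the steady-state voltage would sit above threshold, so the neuron fires at a regular interval set by the membrane time constant.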
Speaker
Alberto Antonietti • University of Pavia
Scheduled for
Dec 1, 2021, 11:30 AM
Timezone
EST
NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning and vice versa?
In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. However, humans are often believed to learn motor control through directed, error-based learning. In this learning setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal. This is unlike reward-based learning, in which errors are assumed to be unsigned, encoding relative successes and failures. Here, we try to understand the relation between these two approaches – reward- and error-based learning – and ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted model-based deterministic policy gradient (MB-DPG). This algorithm draws inspiration from error-based learning to qualitatively reproduce human reaching performance under mirror-reversal. Next, we show that MB-DPG outperforms the other canonical (deep) RL algorithms on single- and multi-target ballistic reaching tasks based on a biomechanical model of the human arm. Finally, we propose that MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
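The contrast between error-based and reward-based learning can be made concrete with a toy one-dimensional reaching task. This sketch is purely illustrative – the plant, learning rates and update rules are invented here, not the MB-DPG implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "arm": a known differentiable forward model mapping a control
# signal u to an end-point position x. The task is to reach x = 1.0.
def model(u):
    return 2.0 * u  # known linear plant (hypothetical)

target = 1.0

# Error-based learning: the controller sees the signed error and its
# gradient through the known model, so it follows the exact gradient.
u_err = 0.0
for _ in range(50):
    error = model(u_err) - target       # signed error
    grad = 2 * error * 2.0              # d(error^2)/du via the model
    u_err -= 0.1 * grad

# Reward-based learning: only a scalar reward (negative squared error)
# is observed; the gradient must be estimated by random perturbation.
u_rew = 0.0
for _ in range(500):
    noise = rng.normal(0, 0.1)
    reward = -(model(u_rew + noise) - target) ** 2
    baseline = -(model(u_rew) - target) ** 2
    u_rew += 0.5 * (reward - baseline) * noise  # REINFORCE-style update

# Both learners approach the correct control u = 0.5, but the
# error-based one converges in far fewer trials.
```

The asymmetry in sample efficiency is the point: signed, differentiable errors give a direct descent direction, while unsigned rewards only permit noisy gradient estimates.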
Speaker
Michele Garibbo • University of Bristol
Scheduled for
Nov 30, 2021, 3:30 PM
Timezone
EST
Advancing Brain-Computer Interfaces by adopting a neural population approach
Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons, recorded with implanted electrodes, into control signals to drive these devices. Despite these impressive advances, the field faces a number of challenges that need to be overcome for BCIs to become widely used in daily living. In this talk, I will focus on two such challenges: 1) building BCIs that allow performing a broad range of actions; and 2) building BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view of how the brain works: our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of this neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.
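Population-wide activity patterns of the kind described above are typically identified by dimensionality reduction. A minimal sketch with synthetic data standing in for recorded neural activity (the population size, latent dimensionality and noise level are arbitrary assumptions, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "population activity": 100 neurons driven by only 3 shared
# latent signals plus private noise -- a toy stand-in for activity
# confined to a low-dimensional neural manifold.
n_neurons, n_time, n_latents = 100, 1000, 3
latents = rng.normal(size=(n_time, n_latents))
mixing = rng.normal(size=(n_latents, n_neurons))
activity = latents @ mixing + 0.1 * rng.normal(size=(n_time, n_neurons))

# PCA via the covariance eigendecomposition: the leading components
# span the manifold, and their variance dwarfs the rest.
centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / (n_time - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sorted descending
explained = eigvals[:n_latents].sum() / eigvals.sum()
```

Here three components explain almost all the variance of a 100-neuron population; the sharp drop after the third eigenvalue is the signature of a low-dimensional manifold.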
Speaker
Juan Alvaro Gallego • Imperial College London
Scheduled for
Nov 29, 2021, 2:00 PM
Timezone
GMT
Playing StarCraft and saving the world using multi-agent reinforcement learning!
"This is my C-14 Impaler gauss rifle! There are many like it, but this one is mine!" – a terran marine
If you have never heard of a terran marine before, then you have probably missed out on playing the very engaging and entertaining strategy computer game StarCraft. However, don’t despair, because what we have in store might be even more exciting! In this interactive session, we will take you, step by step, through how to train a team of terran marines to defeat a team of marines controlled by the built-in game AI in StarCraft II. How will we achieve this? Using multi-agent reinforcement learning (MARL). MARL is a useful framework for building distributed intelligent systems: multiple agents are trained to act as individual decision-makers of some larger system, while learning to work as a team. We will show you how to use Mava (https://github.com/instadeepai/Mava), a newly released research framework for MARL, to build a multi-agent learning system for StarCraft II. We will provide the necessary guidance, tools and background to understand the key concepts behind MARL, how to use Mava's building blocks to build systems, and how to train a system from scratch. We will conclude the session by briefly sharing various exciting real-world application areas for MARL at InstaDeep, such as large-scale autonomous train navigation and circuit board routing – problems that become exponentially more difficult to solve as they scale. Finally, we will argue that many of humanity’s most important practical problems are reminiscent of the ones just described. These include, for example, the need for sustainable management of distributed resources under the pressures of climate change, efficient inventory control and supply routing in critical distribution networks, and robotic teams for rescue missions and exploration.
We believe MARL has enormous potential to be applied in these areas and we hope to inspire you to get excited and interested in MARL and perhaps one day contribute to the field!
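As a taste of the MARL setting the session covers, independent learners sharing a team reward can be sketched in a few lines. This toy matrix game is illustrative only and does not use Mava's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-agent coordination game: both agents receive reward 1 only when
# they pick the same action. Each agent keeps its own Q-table and
# learns independently -- the "independent learners" baseline that
# frameworks such as Mava generalise far beyond.
n_actions = 2
q = [np.zeros(n_actions), np.zeros(n_actions)]
eps, lr = 0.1, 0.1

def act(q_values):
    if rng.random() < eps:              # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(np.argmax(q_values))

for _ in range(2000):
    actions = [act(q[0]), act(q[1])]
    reward = 1.0 if actions[0] == actions[1] else 0.0  # shared team reward
    for i in range(2):                  # each agent updates on its own
        q[i][actions[i]] += lr * (reward - q[i][actions[i]])

# After training, both agents' greedy policies pick the same action:
# they have learned to coordinate from the team reward alone.
```

Even in this two-action game the core MARL difficulty is visible: each agent's reward depends on what the other is simultaneously learning to do.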
Speaker
InstaDeep
Scheduled for
Oct 28, 2021, 2:00 PM
Timezone
GMT+1
Neuropunk revolution and its implementation via real-time neurosimulations and their integrations
In this talk I present the perspectives of the "neuropunk revolution" technologies. The "neuropunk revolution" can be understood as the integration of real-time neurosimulations into biological nervous/motor systems via neurostimulation, or into artificial robotic systems via integration with actuators. I see the added value of real-time neurosimulations as a bridge technology connecting a set of already developed technologies – BCI, neuroprosthetics, AI and robotics – to provide bio-compatible integration with biological or artificial limbs. I present three types of integration of "neuropunk revolution" technologies: inbound, outbound and closed-loop in-outbound systems. The proposed concept shifts how we now see this set of technologies, for example through the integration of a simulated part of the nervous system, external to the body, back into the biological nervous system or muscles.
Speaker
Maxim Talanov • B-Rain Labs LLC, ITIS KFU
Scheduled for
Oct 20, 2021, 11:00 AM
Timezone
GMT+3
Collective Construction in Natural and Artificial Swarms
Natural systems provide both puzzles to unravel and demonstrations of what's possible. The natural world is full of complex systems of dynamically interchangeable, individually unreliable components that produce effective and reliable outcomes at the group level. A complementary goal to understanding the operation of such systems is that of being able to engineer artifacts that work in a similar way. One notable type of collective behavior is collective construction, epitomized by mound-building termites, which build towering, intricate mounds through the joint activity of millions of independent and limited insects. The artificial counterpart would be swarms of robots designed to build human-relevant structures. I will discuss work on both aspects of the problem, including studies of cues that individual termite workers use to help direct their actions and coordinate colony activity, and development of robot systems that build user-specified structures despite limited information and unpredictable variability in the process. These examples illustrate principles used by the insects and show how they can be applied in systems we create.
Speaker
Justin Werfel • Harvard University
Scheduled for
Oct 7, 2021, 2:50 PM
Timezone
GMT
Swarms for people
As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Cm-sized robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.
Speaker
Sabine Hauert • University of Bristol
Scheduled for
Oct 7, 2021, 12:00 PM
Timezone
GMT
Faculty, staff, and research positions available worldwide.
Listing
Project information
We seek a talented and motivated new PhD student to join our team at the University of Bristol. The successful applicant will join the largest centre for multidisciplinary robotics research in the UK. This project will apply cutting-edge robotics and machine learning approaches to advance the physical intelligence of robots toward human-like bimanual object manipulation. The studentship will be based in Bristol Robotics Laboratory and the University of Bristol, co-supervised by Prof. Nathan Lepora (https://lepora.com/) and Dr Efi Psomopoulou. The role is part of a €7M Horizon Europe-funded project on pushing the limits of physical intelligence and performance of robots, particularly in bimanual manipulation. The student will benefit from being part of a large international collaboration of leading European research groups in robotic manipulation, and a vibrant team of postdoctoral and PhD researchers in the Dexterous Robotics group (https://www.bristolroboticslab.com/dexterous-robotics).
The project introduces a novel technological framework for enabling robots to perform complex object manipulation tasks, allowing them to efficiently manipulate highly diverse objects with various properties in terms of shape, size and physical characteristics, similarly to how humans do. We particularly focus on bimanual manipulation robots that can operate in challenging, real-world, possibly human-populated environments, and we further research, develop and fuse the necessary technologies in robot perception, cognition, mechatronics and control to allow such human-like, efficient robotic object manipulation, towards step changes in contemporary service robots.
The position will be based at the Bristol Robotics Laboratory (https://www.bristolroboticslab.com/), the largest centre for multidisciplinary robotics research in the UK. It will operate within the internationally leading Dexterous Robotics Group, an exciting and vibrant research group with several recent lecturer appointments, 25 researchers and a range of state-of-the-art robot equipment. You will use dedicated facilities and expertise from the Robotics Lab in addition to those of the Faculties of Engineering and Science at the University of Bristol and project partners. You will work in a team with the two supervisors, a postdoctoral Research Associate and a Research Technician (both to be advertised soon). We intend to recruit other PhD students to the team to provide a cohesive and supportive environment for its members.
Start date
The post starts on November 1st 2023 and lasts for 3.5 years.
Application process
Please contact Dr Efi Psomopoulou (efi.psomopoulou[at]bristol.ac.uk) and Professor Nathan Lepora (n.lepora[at]bristol.ac.uk) for more details about this post and to apply. Applicants should send a short CV (2 pages max) and a 1-page personal statement describing their training and experience so far, their motivation for doing a PhD, their motivations for applying to the University of Bristol, and why they think we should select them. We are keen to support applicants from minority and under-represented backgrounds (based on protected characteristics) and those who have experienced other challenges or disadvantages; we encourage you to use your personal statement to ensure we can take these factors into account. We will keep applications open until the post is filled.
Funding information
This is a fully-funded PhD studentship at standard UKRI rates (currently £18,022 for the 2023/24 year). Home fees for UK and Irish residents will be covered. There will be additional funds from the project grant for equipment and travel, substantially in excess of those usually available for PhD studies. NOTE: This scholarship covers tuition fees for UK and Irish citizens, and for EU applicants who have been resident in the UK for at least 3 years (some constraints apply around residence for education) and have UK settlement or pre-settlement status under the EU Settlement Scheme. International students can apply but would need to cover the difference between home and overseas fees.
Location
Bristol, UK
Apply by
Jul 1, 2023
Posted on
Nov 5, 2025
Deadline passed
The application window has closed. Check back soon for new opportunities.
Listing
The Institute of Robotics and Cognitive Systems at the University of Lübeck has a vacancy for an Assistant Professorship (Juniorprofessur, Tenure Track W2) for Robotics, for an initial period of three years with an option to extend for a further three years. The future holder of the position should represent the field of robotics in research and teaching, and shall establish their own working group at the Institute of Robotics and Cognitive Systems. Applicants should have a very good doctorate and demonstrable scientific experience in one or more of the following research areas: modelling, simulation, and control of robots; robot kinematics and dynamics; robot sensor technology, e.g., force and moment sensing; robotic systems, e.g., telerobotic systems and humanoid robots; soft robotics and continuum robotics; AI and machine learning methods in robotics; human-robot collaboration and safe autonomous robot systems; AR/VR in robotics; and applications of AI and robotics in medicine. The range of tasks also includes the acquisition of third-party funds and the assumption of project management. The applicant is expected to be scientifically involved in the research focus areas of the institute and the profile areas of the university, especially in the context of projects acquired by the institute itself (public funding, industrial cooperations, etc.). The position holder is expected to be willing to cooperate with the "Lübeck Innovation Hub for Robotic Surgery" (LIROS), the "Center for Doctoral Studies Lübeck" and the "Open Lab for Robotics and Imaging in Industry and Medicine" (OLRIM). In teaching, participation in the degree programme "Robotics and Autonomous Systems" (German-language Bachelor's, English-language Master's) as well as the other degree programmes of the university's STEM sections is expected.
Location
University of Lübeck, Germany
Apply by
Oct 15, 2024
Posted on
Nov 5, 2025
Posting age
6 days ago
Deadline passed
The application window has closed. Check back soon for new opportunities.
Listing
Texas Robotics at the University of Texas at Austin invites applications for tenure-track faculty positions. Outstanding candidates in all areas of robotics will be considered. Tenure-track positions require a Ph.D. or equivalent degree in a relevant area at the time of employment. Successful candidates are expected to pursue an active research program, to teach both graduate and undergraduate courses, and to supervise students in research. The University is fully committed to building a world-class faculty, and we welcome candidates who resonate with our core values of learning, discovery, freedom, leadership, individual opportunity, and responsibility. Candidates who are committed to broadening participation in robotics, at all levels, are strongly encouraged to apply.
Location
University of Texas at Austin
Apply by
Sep 26, 2025
Posted on
Nov 5, 2025
Posting age
6 days ago
Deadline passed
The application window has closed. Check back soon for new opportunities.
Listing
At RobotiXX lab, we perform robotics research at the intersection of motion planning and machine learning, with a specific focus on deployable field robotics. The selected candidates will conduct independent research to develop highly capable and intelligent mobile robots that are robustly deployable in the real world with minimal human supervision, and publish papers in top-tier robotics conferences and journals.
Location
Washington, D.C. area
Apply by
Sep 26, 2025
Posted on
Nov 5, 2025
Posting age
6 days ago
Deadline passed
The application window has closed. Check back soon for new opportunities.
Listing
Texas Robotics at the University of Texas at Austin invites applications for a tenure-track faculty position at the rank of Assistant Professor with a tenure home in the Mechanical Engineering department. Outstanding candidates in all areas of robotics will be considered, with emphasis on novel hardware and control techniques. Successful candidates are expected to pursue an active research program, to teach both graduate and undergraduate courses, and to supervise students in research. The University is fully committed to building a world-class faculty, and we welcome candidates who resonate with our core values of learning, discovery, freedom, leadership, individual opportunity, and responsibility. Candidates who are committed to broadening participation in robotics, at all levels, are strongly encouraged to apply.
Location
Austin, Texas, USA
Apply by
Nov 15, 2025
Posted on
Nov 5, 2025
Posting age
6 days ago
One week remaining
You still have time to prepare a submission (deadline Nov 15, 2025), but consider drafting now to avoid the rush.
Listing
Medical instruments such as endoscopes and catheters, and industrial inspection tools, are long, thin devices that typically deploy by translating their body relative to the environment. This mode of locomotion has several limitations. Friction with the environment can cause these tools to damage their surroundings: in colonoscopy, for example, the pushing action needed to advance a colonoscope can induce large mechanical stresses on delicate tissue and cause bleeding. Such instruments may also fail to deploy in industrial contexts, such as the inspection of a pipe network, because friction accumulates with each successive turn. To address this challenge, inflatable, bio-inspired “vine” robots have been proposed in the literature. Vine robots grow at the tip to deploy: they consist of a thin tube everted into itself at the tip. When pressurized, the material stored inside, called the vine robot tail, translates toward the tip, where it everts. The everted material then forms the vine robot body, which remains stationary with respect to the environment. These robots have been proposed for medical applications such as deployment in the vasculature, the mammary duct, and the intestine, and for industrial and larger-scale applications such as growth through granular media, inspection of archaeological sites, and search and rescue operations. Most applications require a passageway for tools, i.e. a working channel, providing direct access to the robot tip from the base so that tools can be inserted and swapped to perform tasks.
Such tasks can include site visualization with cameras and light sources; the use of lasers, grippers, and cutting tools in surgical applications; or the delivery of water or goods in search and rescue operations. Recently, several research efforts have tackled the inclusion of working channels in vine robots, including recent work by the PI that enables working channels in miniaturized vine robots through material scrunching. However, while previous work has focused on the deployment of these robots, the literature shows that their retraction remains a significant unsolved challenge. This issue prevents their practical use and their adoption by industry. In particular, although vine robots with working channels appear the most useful from an application perspective, only the retraction of vine robots without working channels has been explored to date. The goal of this thesis is therefore to propose general, multi-scale solutions for the retraction of vine robots with working channels. Applications in the medical and industrial fields will be proposed to demonstrate the benefits of the investigated solutions in challenging contexts.
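The eversion principle described above has a simple kinematic consequence worth keeping in mind: because the deployed body stays fixed to the environment while new wall material everts at the tip, the internal tail must travel at twice the tip speed, and reaching a body length L consumes 2L of tube material (the stationary wall plus the tail still inside). The sketch below illustrates this relation; it is a toy illustration with assumed numbers, not part of the thesis project itself.

```python
# Toy kinematics of a tip-everting "vine" robot (illustrative only;
# the function names and example values are assumptions).
# Body stays fixed to the environment; growth happens only at the tip,
# so tail material is consumed at twice the tip growth rate.

def material_needed(body_length_m: float) -> float:
    """Total tube material required for a deployed body of given length:
    the everted wall (L) plus the internal tail feeding it (L)."""
    return 2.0 * body_length_m

def tail_feed_speed(tip_speed_m_s: float) -> float:
    """Rate at which tail material must be supplied so the tip
    advances at tip_speed_m_s."""
    return 2.0 * tip_speed_m_s

if __name__ == "__main__":
    L = 1.5        # target deployed length in metres (assumed)
    v_tip = 0.02   # tip growth speed in m/s (assumed)
    print(material_needed(L))     # metres of tube to stock
    print(tail_feed_speed(v_tip)) # m/s of tail feed at the base
```

The same two-to-one ratio is what makes retraction non-trivial: pulling the tail back in must invert the wall at the tip without buckling the body, which is part of why retraction, especially with a working channel occupying the tail, remains open.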
Location
Montpellier, France
Apply by
Dec 15, 2024
Posted on
Nov 5, 2025
Posting age
6 days ago
Deadline passed
The application window has closed. Check back soon for new opportunities.