Topic · Artificial Intelligence

explainability

Content Overview
4 total items
3 Positions
1 Seminar

Latest

Position · Artificial Intelligence

Justus Piater

University of Innsbruck, Digital Science Center, Department of Computer Science, Intelligent and Interactive Systems
University of Innsbruck, Austria
Feb 9, 2026

The Intelligent and Interactive Systems lab uses machine learning to enhance the flexibility, robustness, generalization, and explainability of robots and vision systems, focusing on methods for learning about structure, function, and other concepts that describe the world in actionable ways. Three University Assistant Positions involve minor teaching duties, with research topics negotiable within the lab's scope. One Project Position involves integrating robotic perception and execution mechanisms for task-oriented object manipulation in everyday environments, with a focus on affordance-driven object part segmentation and object manipulation using reinforcement learning.

Position · Artificial Intelligence

N/A

Dalle Molle Institute for Artificial Intelligence (IDSIA)
Lugano, Switzerland
Feb 9, 2026

The PhD research focuses on the fairness, explainability, and robustness of machine learning systems within the framework of causal counterfactual analysis using formalisms from probabilistic graphical models, probabilistic circuits, and structural causal models.
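The counterfactual machinery this project builds on can be pictured with a toy example. The sketch below walks through Pearl's three-step abduction-action-prediction recipe on a two-variable linear structural causal model; it is a minimal illustration only, not the project's actual formalisms, and every variable and coefficient in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Structural equations of a toy linear SCM:  X := U_x,  Y := 2*X + U_y
u_x = rng.normal()
u_y = rng.normal()
x_obs = u_x
y_obs = 2 * x_obs + u_y

# Step 1, abduction: recover this unit's exogenous noise from the observation.
u_x_hat = x_obs
u_y_hat = y_obs - 2 * x_obs

# Step 2, action: intervene do(X = x_obs + 1), overriding X's mechanism.
x_cf = x_obs + 1

# Step 3, prediction: re-evaluate Y under the intervention with the SAME
# noise, yielding the counterfactual "what Y would have been for this unit".
y_cf = 2 * x_cf + u_y_hat

print(f"observed y = {y_obs:.3f}, counterfactual y = {y_cf:.3f}")
```

Probabilistic circuits and graphical models, as mentioned in the posting, extend this kind of query to settings where the structural equations and noise are only known distributionally.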

Position · Artificial Intelligence

Dr. Robert Legenstein

Graz University of Technology
Austria
Feb 9, 2026

For the recently established Cluster of Excellence CoE Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria.

In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI. As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use.

These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems. Because it prioritizes fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work, providing theory and models for future AI and deploying them in applications.
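One simple way to picture the bilateral idea is a pipeline in which a sub-symbolic model proposes and a symbolic rule base disposes. The sketch below is a deliberately minimal, hypothetical illustration of that pattern; BILAI's actual research combines the two paradigms in far deeper ways, and every function name, rule, and score here is invented for illustration.

```python
from typing import Dict, Set

def neural_scores(image_id: str) -> Dict[str, float]:
    """Stand-in for a neural network's softmax over object classes."""
    return {"cat": 0.55, "dog": 0.40, "car": 0.05}

# Symbolic knowledge base: each class entails a category, cat(x) -> animal(x).
ENTAILS: Dict[str, str] = {"cat": "animal", "dog": "animal", "car": "vehicle"}

def predict(image_id: str, scene_facts: Set[str]) -> str:
    """Return the highest-scoring label consistent with the symbolic facts."""
    ranked = sorted(neural_scores(image_id).items(),
                    key=lambda kv: kv[1], reverse=True)
    for label, _ in ranked:
        if ENTAILS[label] in scene_facts:  # symbolic consistency check
            return label
    return ranked[0][0]  # no label is consistent; fall back to neural choice

# With the symbolic fact that the scene contains a vehicle (and no animal),
# the low-confidence "car" overrides the network's preferred "cat".
print(predict("img-001", scene_facts={"vehicle"}))  # -> car
```

The symbolic filter both improves robustness and makes the final decision explainable: the rule that fired can be reported alongside the prediction.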

Seminar · Artificial Intelligence

Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledge

Jonathan Gerrand
University of the Witwatersrand
Nov 4, 2021

In this talk, Jonathan frames the current challenges of explainability and understanding in ML-driven approaches to image processing, and argues that explicit inference techniques offer a potential solution.
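To make the hard-attention idea concrete, here is a minimal sketch of one common formulation: score image patches against a task query and keep only the top-k, so the selected indices themselves record where the model looked. This is an illustrative toy under assumed shapes and names, not the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 64))  # 16 image patches, 64-d features each
query = rng.normal(size=64)          # task-dependent query vector

scores = patches @ query             # one relevance score per patch
k = 3
top_idx = np.argsort(scores)[-k:]    # hard selection: keep only the top-k

# Downstream reasoning sees just the selected patches, and the chosen
# indices double as a human-inspectable record of where the model attended.
selected = patches[top_idx]
print("attended patches:", sorted(top_idx.tolist()))
print("selected feature block shape:", selected.shape)
```

Unlike soft attention, which blends all patches with continuous weights, the discrete selection makes the model's evidence explicit, which is precisely its appeal for explainability.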


