Reasoning
Dr. Robert Legenstein
For the recently established Cluster of Excellence CoE Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria. In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI. As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems, rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, effectively combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use. These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems. Prioritizing fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work in order to provide theory and models for future AI and deployment to applications.
The interdisciplinary M.Sc. Program in Cognitive Systems combines courses from neural/connectionist and symbolic Artificial Intelligence, Machine Learning, and Cognitive Psychology, to explore the fundamentals of perception, attention, learning, mental representation, and reasoning, in humans and machines. The M.Sc. Program is offered jointly by two public universities in Cyprus (the Open University of Cyprus and the University of Cyprus) and has been accredited by the national Quality Assurance Agency. The program is directed by academics from the participating universities, and courses are offered in English via distance learning by an international team of instructors.
Vito Trianni
A fixed-term research position is open for a post-doc, or for a PhD student nearing the end of their doctoral program. The goal of the research is to study hybrid collective intelligence systems for decision support in complex open-ended problems. It involves the design and implementation of a hybrid collective intelligence system that exploits the interaction between human experts and artificial agents, based on knowledge graphs and ontologies for knowledge representation, integration, and reasoning.
The position integrates into an attractive environment of existing activities in artificial intelligence, such as machine learning for robotics and computer vision, natural language processing, recommender systems, schedulers, virtual and augmented reality, and digital forensics. The candidate should engage in research and teaching in the general area of artificial intelligence. Examples of possible foci include: machine learning for pattern recognition, prediction, and decision making; data-driven, adaptive, learning, and self-optimizing systems; explainable and transparent AI; representation learning; generative models; neuro-symbolic AI; causality; distributed/decentralized learning; environmentally friendly, sustainable, data-efficient, privacy-preserving AI; neuromorphic computing and hardware aspects; knowledge representation, reasoning, and ontologies. Collaborations with research groups at the Department of Computer Science, the Research Areas, and in particular the Digital Science Center of the University, as well as with business, industry, and international research institutions are expected. The candidate should reinforce or complement existing strengths of the Department of Computer Science.
Coraline Rinn Iordan
The University of Rochester’s Department of Brain and Cognitive Sciences seeks to hire an outstanding early-career candidate in the area of Human Cognition. Areas of study may center on any aspect of higher-level cognitive processes such as decision-making, learning and memory, concepts, language and communication, development, reasoning, metacognition, and collective cognition. We particularly welcome applications from candidates researching cognition in human subjects through behavioral, computational or neuroimaging methods. Successful candidates will develop a research program that establishes new collaborations within the department and across the university, and will also be part of a university-wide community engaged in graduate and undergraduate education.
Steve Schneider
The School of Computer Science and Electronic Engineering is seeking to recruit a full-time Lecturer in Natural Language Processing to grow our AI research. The School is home to two established research centres with expertise in AI and Machine Learning: the Computer Science Research Centre and the Centre for Vision, Speech and Signal Processing (CVSSP). This post is aligned to the Nature Inspired Computing and Engineering group within Computer Science. This role encourages applicants from the areas of natural language processing, including language modelling, language generation (machine translation/summarisation), explainability and reasoning in NLP, and/or aligned multimodal challenges for NLP (vision-language, audio-language, and so on), and we are particularly interested in candidates who enhance our current strengths and bring complementary areas of AI expertise. Surrey has an established international reputation in AI research, ranked 1st in the UK for computer vision and top 10 for AI, computer vision, machine learning, and natural language processing (CSRankings.org), and was 7th in the UK for REF2021 outputs in Computer Science research. Computer Science and CVSSP are at the core of the Surrey Institute for People-Centred AI (PAI), established in 2021 as a pan-University initiative which brings together leading AI research with cross-discipline expertise across health, social, behavioural, and engineering sciences, and business, law, and the creative arts to shape future AI to benefit people and society. PAI leads a portfolio of £100m in grant awards, including major research activities in creative industries and healthcare, and two doctoral training programmes with funding for over 100 PhD researchers: the UKRI AI Centre for Doctoral Training in AI for Digital Media Inclusion, and the Leverhulme Trust Doctoral Training Network in AI-Enabled Digital Accessibility.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Llama 3.1 Paper: The Llama 3 Herd of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Improving Language Understanding by Generative Pre-Training
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
Analogy and Law
Abstracts: https://sites.google.com/site/analogylist/analogical-minds-seminar/analogy-and-law-symposium
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
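The predictive hippocampal representations and the spectral entorhinal compression described above are commonly formalized via the successor representation (SR). The following is a hedged, minimal sketch of that idea; the ring-shaped toy environment, the discount factor, and the reward placement are illustrative assumptions, not materials from the talk:

```python
import numpy as np

# Toy environment: a random walk on a ring of n states under a fixed policy,
# with transition matrix T. The successor representation is
#   M = sum_t gamma^t T^t = (I - gamma T)^{-1},
# a predictive map of discounted future state occupancy.
n, gamma = 8, 0.9
T = np.zeros((n, n))
for s in range(n):                          # step left or right with equal odds
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

M = np.linalg.inv(np.eye(n) - gamma * T)

# Rapid adaptation to new goals: for any reward vector r, values are V = M r,
# so changing the goal reuses the same learned predictive map M.
r = np.zeros(n)
r[3] = 1.0                                  # place the reward at state 3
V = M @ r

# Spectral compression (the entorhinal account): eigenvectors of the SR give
# smooth basis functions over the state space that generalize among
# related states.
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
```

In this sketch the value peaks at the rewarded state and falls off smoothly around the ring; relocating the reward changes only `r`, never the learned map `M`, which is the sense in which predictive representations support rapid adaptation to new goals.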
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher-robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior: analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
How Children Design by Analogy: The Role of Spatial Thinking
Analogical reasoning is a common reasoning tool for learning and problem-solving. Existing research has extensively studied children’s reasoning when comparing or choosing from ready-made analogies. Relatively less is known about how children come up with analogies in authentic learning environments. Design education provides a suitable context for investigating how children generate analogies for creative learning purposes. Meanwhile, the frequent use of visual analogies in design provides an additional opportunity to understand the role of spatial reasoning in design-by-analogy. Spatial reasoning is one of the most studied human cognitive factors and is critical to the learning of science, technology, engineering, arts, and mathematics (STEAM). There is growing interest in exploring the interplay between analogical reasoning and spatial reasoning. In this talk, I will share qualitative findings from a case study in which a class of 11-to-12-year-olds in the Netherlands participated in a biomimicry design project. These findings illustrate (1) practical ways to support children’s analogical reasoning in the ideation process and (2) the potential role of spatial reasoning, as seen in children mapping form-function relationships in nature analogically and adaptively to those in human designs.
Cognitive supports for analogical reasoning in rational number understanding
In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism that lets children learn novel, abstract concepts from only limited input. However, retrieving an appropriate analogy requires cognitive support. In this talk, I propose and examine number lines as a mathematical schema of the number system that facilitates both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention of spatial alignment between integers and fractions on number lines produced broad and durable gains in understanding fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies shed light on helpful practices for STEM education curricula and instruction.
Analogical inference in mathematics: from epistemology to the classroom (and back)
In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the presentation, we will present some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.
Multimodal Blending
In this talk, I’ll consider how new ideas emerge from old ones via the process of conceptual blending. I’ll start by considering analogical reasoning in problem solving and the role conceptual blending plays in these problem-solving contexts. Then I’ll consider blending in multi-modal contexts, including timelines, memes (viz. image macros), and, if time allows, zoom meetings. I suggest mappings analogy researchers have traditionally considered superficial are often important for the development of novel abstractions. Likewise, the analogue portion of multimodal blends anchors their generative capacity. Overall, these observations underscore the extent to which meaning is a socially distributed process whose intermediate products are stored in cognitive artifacts such as text and digital images.
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure-mapping theory of Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. In recent years, however, and not without merit, semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been refocusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation and (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm – the delta band (around 2-3 Hz) – suggesting its highly integrative nature. Finally, we have developed a new task – Graph Mapping – which involves the pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is also one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Do large language models solve verbal analogies like children do?
Analogical reasoning –learning about new things by relating them to previous knowledge– lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's and six large language models' (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) responses to verbal analogies extracted from an online adaptive learning environment, where >14,000 7-12 year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch language verbal analogies.
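For the static embedding models in the list above (Word2Vec, Fasttext), the standard way to answer an A:B::C:? item is vector arithmetic over word vectors. The following is a toy sketch of that mechanism; the vocabulary, dimensionality, and constructed "animal to its home" relation are illustrative assumptions, not the study's materials or models:

```python
import numpy as np

# Hypothetical toy vectors (not a trained model): static embeddings answer
# A:B::C:? by picking the candidate whose vector is closest to
# v(B) - v(A) + v(C).
rng = np.random.default_rng(0)
vocab = ["horse", "stable", "chicken", "coop", "egg"]
vec = {word: rng.normal(size=16) for word in vocab}

# Build the relation "animal -> its home" into the toy space by construction,
# so that stable - horse ≈ coop - chicken holds, as it roughly does in
# well-trained embedding spaces.
home = rng.normal(size=16)
vec["stable"] = vec["horse"] + home
vec["coop"] = vec["chicken"] + home

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c, candidates):
    """Return the candidate nearest (by cosine) to vec[b] - vec[a] + vec[c]."""
    target = vec[b] - vec[a] + vec[c]
    return max(candidates, key=lambda cand: cosine(vec[cand], target))

# "Horse belongs to stable like chicken belongs to ...?"
answer = solve_analogy("horse", "stable", "chicken", ["coop", "egg"])
```

In this constructed space the relational answer wins by design; the abstract's point is that in real models' spaces a strong associate like "egg" can sit closer to the target vector than the relational answer, producing the associative errors described above.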
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
Learning Relational Rules from Rewards
Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful for characterizing the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we build a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model in three Atari games that required considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the relationship between our model and models of relational and analogical reasoning, as well as its limitations and future directions of research.
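The core RRL idea, learning which of many candidate relations are worth weighting, can be sketched minimally. This is a hedged illustration, not the paper's actual function approximator or games: the feature names, the Pong-like one-step reward scheme, and all parameters here are my own simplifications.

```python
import random

# Candidate relations the learner can attend to; only some are useful.
FEATURES = ["ball_left_of_paddle", "ball_right_of_paddle", "ball_above_paddle"]
ACTIONS = ["left", "right"]

def phi(ball_x, paddle_x):
    # Extract which candidate relations hold in the current state.
    return {
        "ball_left_of_paddle": ball_x < paddle_x,
        "ball_right_of_paddle": ball_x > paddle_x,
        "ball_above_paddle": True,  # always holds in this 1-D toy task
    }

# Q(s, a) = sum of the weights of the relations active in s, per action.
w = {a: {f: 0.0 for f in FEATURES} for a in ACTIONS}

def q(feats, a):
    return sum(w[a][f] for f in FEATURES if feats[f])

random.seed(0)
alpha = 0.1
for episode in range(500):
    ball_x, paddle_x = random.randint(0, 9), random.randint(0, 9)
    feats = phi(ball_x, paddle_x)
    a = random.choice(ACTIONS)              # uniform exploration
    # Moving toward the ball is rewarded (one-step, bandit-style task).
    reward = 1.0 if (a == "left") == (ball_x < paddle_x) else 0.0
    delta = alpha * (reward - q(feats, a))
    for f in FEATURES:
        if feats[f]:
            w[a][f] += delta

# After training, the weight on "ball_left_of_paddle" dominates for action
# "left": the learner has singled out the task-relevant relation, while the
# always-true "ball_above_paddle" relation ends up carrying much less weight.
```

The selection effect is the point: nothing tells the learner in advance which relations matter, yet reward-driven updates concentrate weight on the informative ones, a miniature version of selecting adequate relational representations.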
AI-assisted language learning: Assessing learners who memorize and reason by analogy
Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material provided to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as a part of the learner’s model.
Is Theory of Mind Analogical? Evidence from the Analogical Theory of Mind cognitive model
Theory of mind, which consists of reasoning about the knowledge, beliefs, desires, and similar mental states of others, is a key component of social reasoning and social interaction. While it has been studied by cognitive scientists for decades, none of the prevailing theories of the processes that underlie theory of mind reasoning and development explains the breadth of experimental findings. I propose that this is because theory of mind is, like much of human reasoning, inherently analogical. In this talk, I will discuss several theory of mind findings from the psychology literature and the challenges they pose for our understanding of theory of mind, and bring in evidence from the Analogical Theory of Mind (AToM) cognitive model that demonstrates how these findings fit into an analogical understanding of theory of mind reasoning.
Analogy and Spatial Cognition: How and Why they matter for STEM learning
“Space is the universal donor for relations” (Gentner, 2014). This quote is the foundation of my talk. I will explore how and why visual representations and analogies are related. I will also explore how considering the relation between analogy and spatial reasoning can shed light on why and how spatial thinking is correlated with learning in STEM fields. For example, I will consider children’s number sense and learning of the number line from the perspective of analogical reasoning.
Analogical retrieval across disparate task domains
Previous experiments have shown that a comparison of two written narratives highlights their shared relational structure, which in turn facilitates the retrieval of analogous narratives from the past (e.g., Gentner, Loewenstein, Thompson, & Forbus, 2009). However, analogical retrieval occurs across domains that appear more conceptually distant than merely different narratives, and the deepest analogies use matches in higher-order relational structure. The present study investigated whether comparison can facilitate analogical retrieval of higher-order relations across written narratives and abstract symbolic problems. Participants read stories which became retrieval targets after a delay, cued by either analogous stories or letter-strings. In Experiment 1 we replicated Gentner et al. who used narrative retrieval cues, and also found preliminary evidence for retrieval between narrative and symbolic domains. In Experiment 2 we found clear evidence that a comparison of analogous letter-string problems facilitated the retrieval of source stories with analogous higher-order relations. Experiment 3 replicated the retrieval results of Experiment 2 but with a longer delay between encoding and recall, and a greater number of distractor source stories. These experiments offer support for the schema induction account of analogical retrieval (Gentner et al., 2009) and show that the schemas abstracted from comparison of narratives can be transferred to non-semantic symbolic domains.
Analogy Use in Parental Explanation
How and why are analogies spontaneously generated? Despite the prominence of analogy in learning and reasoning, there is little research on whether and how analogy is spontaneously generated in everyday settings. Here we fill this gap by gathering parents' answers to children's real questions and examining analogy use in parental explanations. Study 1 found that parents used analogy spontaneously in their explanations, despite no prompt or mention of analogy in the instructions. Study 2 found that these analogical explanations were rated highly by parents, schoolteachers, and university students alike. In Study 3, six-year-olds also rated good analogical explanations highly but, unlike their parents, did not rate them higher than causal, non-analogical explanations. We discuss what makes an analogy a good explanation, and how theories from both explanation and analogy research explain one’s motivation for spontaneously generating analogies.
Exploration-Based Approach for Computationally Supported Design-by-Analogy
Engineering designers practice design-by-analogy (DbA) during concept generation to retrieve knowledge from external sources or memory as inspiration to solve design problems. DbA is a tool for innovation that involves retrieving analogies from a source domain and transferring the knowledge to a target domain. While DbA produces innovative results, designers often come up with analogies by themselves or through serendipitous, random encounters. Computational support systems for searching analogies have been developed to facilitate DbA in systematic design practice. However, many systems have focused on a query-based approach, in which a designer inputs a keyword or a query function and is returned a set of algorithmically determined stimuli. In this presentation, a new analogical retrieval process that leverages a visual interaction technique is introduced. It enables designers to explore a space of analogies, rather than be constrained by what’s retrieved by a query-based algorithm. With an exploration-based DbA tool, designers have the potential to uncover more useful and unexpected inspiration for innovative design solutions.
Semantic Distance and Beyond: Interacting Predictors of Verbal Analogy Performance
Prior studies of A:B::C:D verbal analogies have identified several factors that affect performance, including the semantic similarity between source and target domains (semantic distance), the semantic association between the C-term and incorrect answers (distracter salience), and the type of relations between word pairs (e.g., categorical, compositional, and causal). However, it is unclear how these stimulus properties affect performance when utilized together. Moreover, how do these item factors interact with individual differences such as crystallized intelligence and creative thinking? Several studies reveal interactions among these item and individual difference factors impacting verbal analogy performance. For example, a three-way interaction demonstrated that the effects of semantic distance and distracter salience had a greater impact on performance for compositional and causal relations than for categorical ones (Jones, Kmiecik, Irwin, & Morrison, 2022). Implications for analogy theories and future directions are discussed.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
From the Didactic to the Heuristic Use of Analogies in Science Teaching
Extensive research on science teaching has shown the effectiveness of analogies as a didactic tool which, when appropriately and effectively used, facilitates the learning process of abstract concepts. This seminar does not dispute the efficacy of such a didactic use of analogies, but shifts attention and interest to their heuristic use in approaching and understanding what is previously unknown. Such a use of analogies derives from research with 10- to 17-year-olds who, when asked to make predictions in novel situations and then to provide explanations for these predictions, self-generated analogies and reasoned on their basis. This heuristic use of analogies can serve science teaching by revealing how students approach situations they have not considered before, as well as the sources they draw upon in doing so.
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic cues, social and pragmatic cues, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and, further, to gain insights into them. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
A new experimental paradigm to study analogy transfer
Analogical reasoning is one of the most complex cognitive functions in humans, allowing abstract thinking, high-level reasoning, and learning. Based on analogical reasoning, one can extract an abstract and general concept (i.e., an analogy schema) from a familiar situation and apply it to a new context or domain (i.e., analogy transfer). These processes allow us to solve problems we have never encountered before and to generate new ideas. However, the place of analogy transfer in problem-solving mechanisms is unclear. This presentation will describe several experiments with three main findings. First, we show how analogy transfer facilitates problem solving, replicating existing empirical data largely based on the radiation/fortress problems with four new riddles. Second, we propose a new experimental task that allows us to quantify analogy transfer. Finally, using network science methodology, we show how restructuring the mental representation of a problem can predict successful solving of an analogous problem. These results shed new light on the cognitive mechanisms underlying solution transfer by analogy and provide a new tool to quantify individual abilities.
Assessing the potential for learning analogy problem-solving: does EF play a role?
Analogical reasoning is related to everyday learning and scholastic learning and is a robust predictor of g. Therefore, children's ability to reason by analogy is often measured in a school context to gain insight into children's cognitive and intellectual functioning. Often, the ability to reason by analogy is measured by means of conventional, static instruments. Static tests are criticised by researchers and practitioners for providing only an overview of what individuals have learned in the past, and for this reason are assumed not to tap into the potential for learning, in the sense of Vygotsky's zone of proximal development. This seminar will focus on children's potential for reasoning by analogy, as measured by means of a dynamic test, which has a test-training-test design. In so doing, the potential relationship between dynamic test outcomes and executive functioning will be explored.
Symposium on cross-cultural research in analogical reasoning
Abstracts: https://www.sites.google.com/site/analogylist/cross-cultural-symposium
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool to store information efficiently and to make wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms like the relational match-to-sample task often show a failure to detect abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving those tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, which is well known in the infant and cognitive modeling literature, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction and adapted to their species-specific reward preferences.
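The logic of overhypothesis formation can be illustrated with a deliberately minimal Bayesian sketch. This is an illustration of the general idea only, not the authors' model, which operates over multiple levels of abstraction and species-specific reward preferences; the two hypotheses, counts, and flat prior below are invented for the example.

```python
from fractions import Fraction

# Two hypotheses about how containers are filled with two reward types:
#   "uniform": each container holds a single type, chosen at random
#   "mixed":   each item's type is drawn independently (50/50)
# We observe 3 containers of 4 items each, all internally homogeneous.
n_containers, items_per = 3, 4

half = Fraction(1, 2)
# Likelihood of the observed homogeneous data under each hypothesis
lik_uniform = half ** n_containers                 # one type choice per container
lik_mixed = half ** (n_containers * items_per)     # every item independent

prior = half  # equal prior probability for the two hypotheses
post_uniform = (prior * lik_uniform) / (prior * lik_uniform + prior * lik_mixed)

# The overhypothesis at work: after sampling ONE item from a new container,
# predict the type of the remaining items in that container.
p_next_same = post_uniform * 1 + (1 - post_uniform) * half
print(float(post_uniform), float(p_next_same))
```

A few homogeneous containers are enough to push nearly all posterior mass onto the "uniform" overhypothesis, so a single sample from a new container licenses a confident prediction about the rest, which is exactly the sparse-data generalization the abstract describes.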
Implementing structure mapping as a prior in deep learning models for abstract reasoning
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
Analogical Reasoning with Neuro-Symbolic AI
Knowledge discovery with computers requires a huge amount of search. Analogical reasoning is effective for efficient knowledge discovery. Therefore, we proposed analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI is a combination of Symbolic AI and artificial neural networks; it is easy for humans to interpret and robust against data ambiguity and errors. We have implemented analogical reasoning systems using Neuro-Symbolic AI models with word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
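The core idea, symbolic inference relaxed by embedding similarity, can be sketched in a few lines. The vectors, threshold, and predicate names below are invented for illustration; the actual systems operate over Prolog knowledge bases with learned word embeddings.

```python
import numpy as np

# Hand-made toy concept embeddings (a real system would use learned vectors)
emb = {
    "dog":  np.array([1.0, 0.9, 0.0]),
    "wolf": np.array([0.9, 1.0, 0.1]),
    "car":  np.array([0.0, 0.1, 1.0]),
}

def sim(a, b):
    """Cosine similarity between two concept embeddings."""
    va, vb = emb[a], emb[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Known symbolic fact: canine(dog).
facts = [("canine", "dog")]

def soft_prove(query, threshold=0.9):
    """Soft unification: an argument matches a fact's argument if the
    embedding similarity exceeds a threshold, instead of exact equality."""
    pred, arg = query
    return any(p == pred and sim(a, arg) >= threshold for p, a in facts)

print(soft_prove(("canine", "wolf")))  # inferred by analogy to "dog"
print(soft_prove(("canine", "car")))   # dissimilar concept, not inferred
```

Replacing exact term matching with a similarity test is what lets a logic-based system extend known rules to analogous, unseen constants while remaining interpretable.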
Towards a recipe for physical reasoning in humans and machines
Reasoning Ability: Neural Mechanisms, Development, and Plasticity
Relational thinking, or the process of identifying and integrating relations between mental representations, is regularly invoked during reasoning. This mental capacity enables us to draw higher-order abstractions and generalize across situations and contexts, and we have argued that it should be included in the pantheon of executive functions. In this talk, I will briefly review our lab's work characterizing the roles of lateral prefrontal and parietal regions in relational thinking. I will then discuss structural and functional predictors of individual differences and developmental changes in reasoning.
Do we reason differently about affectively charged analogies? Insights from EEG research
Affectively charged analogies are commonly used in literature and art, but also in politics and argumentation. There are reasons to think we may process these analogies differently. Notably, analogical reasoning is a complex process that requires the use of cognitive resources, which are limited. In the presence of affectively charged content, some of these resources might be directed towards affective processing and away from analogical reasoning. To investigate this idea, I examined effects of affective charge on differences in brain activity evoked by sound versus unsound analogies. The presentation will detail the methods and results of two such experiments, one in which participants saw analogies formed of neutral and negative words, and one in which the analogies were created by combining conditioned symbols. I will also briefly discuss future research aiming to investigate the effects of analogical reasoning on brain activity related to affective processing.
Understanding and Enhancing Creative Analogical Reasoning
This talk will focus on our lab's extensive research on understanding and enhancing creative analogical reasoning. I will cover the development of the analogy finding matrix task, evidence for conscious augmentation of creative state during this task, and the real-world implications this ability has for college STEM education. I will also discuss recent research aimed at enhancing performance on this creative analogical reasoning task using both transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS).
The Limits of Causal Reasoning in Human and Machine Learning
A key purpose of causal reasoning by individuals and by collectives is to enhance action, to give humans yet more control over their environment. As a result, causal reasoning serves as the infrastructure of both thought and discourse. Humans represent causal systems accurately in some ways, but also show some systematic biases (we tend to neglect causal pathways other than the one we are thinking about). Even when accurate, people’s understanding of causal systems tends to be superficial; we depend on our communities for most of our causal knowledge and reasoning. Nevertheless, we are better causal reasoners than machines. Modern machine learners do not come close to matching human abilities.
Scaffolding up from Social Interactions: A proposal of how social interactions might shape learning across development
Social learning and analogical reasoning both provide exponential opportunities for learning. These skills have largely been studied independently, but my future research asks how combining skills across previously independent domains could add up to more than the sum of their parts. Analogical reasoning allows individuals to transfer learning between contexts and opens up infinite opportunities for innovation and knowledge creation. Its origins and development, so far, have largely been studied in purely cognitive domains. Constraining analogical development to non-social domains may mistakenly lead researchers to overlook its early roots and limit ideas about its potential scope. Building a bridge between social learning and analogy could facilitate identification of the origins of analogical reasoning and broaden its far-reaching potential. In this talk, I propose that the early emergence of social learning, its saliency, and its meaningful context for young children provides a springboard for learning. In addition to providing a strong foundation for early analogical reasoning, the social domain provides an avenue for scaling up analogies in order to learn to learn from others via increasingly complex and broad routes.
Why Some Intelligent Agents are Conscious
In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. First, the agent is stipulated to be capable of (meta)representing subjective ‘qualitative’ sensory information, in the sense that it can easily assess how exactly similar a sensory signal is to all other possible sensory signals. This information is subjective in the sense that it concerns how the different stimuli can be distinguished by the agent itself, rather than how physically similar they are. For this to happen, sensory coding needs to satisfy sparsity and smoothness constraints, which are known to facilitate metacognition and generalization. Second, this qualitative information can under some specific circumstances acquire an ‘assertoric force’. This happens when a certain self-monitoring mechanism decides that the qualitative information reliably tracks the current state of the world, and informs a general symbolic reasoning system of this fact. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring an assertoric status within one’s belief system. When this happens, the perceptual content presents itself as reflecting the state of the world right now, in ways that seem undeniably rational to the agent. At the same time, without effort, the agent also knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this architecture, for which consciousness might have arisen as a byproduct.
Causal Reasoning: Its role in the architecture and development of the mind
The seminar will first outline the architecture of the human mind, specifying general and domain-specific mental processes. The place of causal reasoning and its relations with the other processes will be specified. Experimental, psychometric, developmental, and brain-based evidence will be summarized. The main message of the talk is that causal thought involves domain-specific core processes rooted in perception and served by special brain networks which capture interactions between objects. With development, causal reasoning is increasingly associated with a general abstraction system which generates general principles underlying inductive, analogical, and deductive reasoning and also heuristics for specifying causal relations. These associations are discussed in some detail. Possible implications for artificial intelligence and educational implications are also discussed.
Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledge
In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and their potential solution through explicit inference techniques.
A Functional Approach to Analogical Reasoning in Scientific Practice
The talk argues for a new approach to analysing analogical reasoning in scientific practice. Traditionally, philosophers of science tend to analyse analogical reasoning in either a top-down or a bottom-up way. Examples of top-down approaches include Mary Hesse’s seminal work (1963) and Paul Bartha’s articulation model (2010), while the most popular bottom-up approach is John Norton’s material approach (2018). I will address the problems of these traditional approaches and introduce an alternative approach, which is motivated by my exemplar-based approach to the history of science, defended in my recent book (2020).
Towards a Theory of Human Visual Reasoning
Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received a good amount of attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule and are unable to generalise to tasks with any additional rules without extensive additional training. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with principally novel components, namely a) visual reasoning based on the established theory of recognition by components, and b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition: rather than using tools that have been shown successful in the machine learning field to inform psychological theorising, we use established psychological theories to inform the development of a machine system.
Children's relational noun generalization strategies
A common result is that comparison settings (i.e., several stimuli introduced simultaneously) favor conceptualization and generalization. However, still little is known about the solving strategies used by children to compare and generalize novel words. Understanding the temporal dynamics of children’s solving strategies may help assess which processes underlie generalization. We tested children in noun and relational noun generalization tasks and collected eye tracking data. To analyze and interpret the data we followed predictions made by existing models of analogical reasoning and generalization. The data reveal clear patterns of exploration in which participants compare learning items before searching for a solution. Analyses of the beginning of trials show that early comparisons favor generalization and that errors may be caused by a lack of early comparison. Children then pursue their search in different ways according to the task. In this presentation I will present the generalization strategies revealed by eye tracking, compare the strategies from both tasks, and relate them to existing models.
Psychological essentialism in working memory research
Psychological essentialism is ubiquitous. It is one of the primary bases of thought and behaviour throughout our entire lifetime. This human tendency to posit an unseen hidden entity behind observable phenomena or exemplars, however, leads us to somewhat biased thinking and reasoning even in the realm of science, including psychology. For example, a latent variable extracted from various measurements is just a statistical property calculated in structural equation modeling and therefore need not be a fundamental reality. Yet we occasionally feel that the essential nature of such a psychological construct exists a priori. This talk will demonstrate examples of psychological essentialism in psychology and examine its resultant influences on working memory related issues, e.g., working memory training. Such demonstration, examination, and subsequent discussion of these topics will provide us with an opportunity to reconsider the concept of working memory.
Beyond the binding problem: From basic affordances to symbolic thought
Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.
Analogical Reasoning Plus: Why Dissimilarities Matter
Analogical reasoning remains foundational to the human ability to forge meaningful patterns within the sea of information that continually inundates the senses. Yet, meaningful patterns rely not only on the recognition of attributional similarities but also dissimilarities. Just as the perception of images rests on the juxtaposition of lightness and darkness, reasoning relationally requires systematic attention to both similarities and dissimilarities. With that awareness, my colleagues and I have expanded the study of relational reasoning beyond analogous reasoning and attributional similarities to highlight forms based on the nature of core dissimilarities: anomalous, antinomous, and antithetical reasoning. In this presentation, I will delineate the character of these relational reasoning forms; summarize procedures and measures used to assess them; overview key research findings; and describe how the forms of relational reasoning work together in the performance of complex problem solving. Finally, I will share critical next steps for research which has implications for instructional practice.
Metacognition for past and future decision making in primates
As Socrates said, "I know that I know nothing": our mind's capacity to be aware of our own ignorance is essential for abstract and conceptual reasoning. However, the biological mechanism that enables such hierarchical thought, or metacognition, has remained unknown. In the first part of the talk, I will present our studies on the neural mechanism for metacognition of memory in macaque monkeys. In reality, awareness of ignorance is essential not only for retrospection of the past but also for exploration of novel, unfamiliar environments in the future. However, this proactive feature of metacognition has been understudied in neuroscience. In the second part of the talk, I will present our studies on the neural mechanism for prospective metacognitive matching among uncertain options prior to perceptual decision making in humans and monkeys. These studies converge to suggest that higher-order processes that self-evaluate mental states, either retrospectively or prospectively, are implemented in primate neural networks.
Conceptual Change Induced by Analogical Reasoning Sparks “Aha!” Moments
Although analogical reasoning has been assumed to involve insight and its associated “aha!” experience, the relationship between these phenomena has never been directly probed empirically. In this study we investigated the relationship between representational change and the “aha!” experience during analogical reasoning. A novel set of verbal analogy stimuli were developed for use as an insight task. Across two experiments, participants reported significantly stronger aha moments and showed greater evidence of representational change on trials with more semantically distant analogies. Further, the strength of reported aha moments was correlated with the degree to which participants’ descriptions of the analogies changed over the course of each trial. Lastly, we probed the individual differences associated with a tendency to report stronger "aha" experiences, particularly related to mood, curiosity, and reward responsiveness. The findings shed light on the affective components of analogical reasoning and suggest that measuring affective responses during such tasks may elucidate novel insights into the mechanisms of creative analogical reasoning.
Differential working memory functioning
The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated effects of individual differences in cognitive demand and in another determinant of conflict monitoring, termed reinforcement sensitivity, on conflict monitoring. We showed evidence of differential variability of conflict monitoring intensity using electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Based on previous findings, the team of the Leue-Lab investigated new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data of the Leue-Lab suggest the relevance of individual differences in conflict monitoring for the context of deception. In this respect, we plan new studies highlighting individual differences in the functioning of the Anterior Cingulate Cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing (Passcode to rewatch: 0R8v&m59).
Achieving Abstraction: Early Competence & the Role of the Learning Context
Children's emerging ability to acquire and apply relational same-different concepts is often cited as a defining feature of human cognition, providing the foundation for abstract thought. Yet, young learners often struggle to ignore irrelevant surface features to attend to structural similarity instead. I will argue that young children have--and retain--genuine relational concepts from a young age, but tend to neglect abstract similarity due to a learned bias to attend to objects and their properties. Critically, this account predicts that differences in the structure of children's environmental input should lead to differences in the type of hypotheses they privilege and apply. I will review empirical support for this proposal that has (1) evaluated the robustness of early competence in relational reasoning, (2) identified cross-cultural differences in relational and object bias, and (3) provided evidence that contextual factors play a causal role in relational reasoning. Together, these studies suggest that the development of abstract thought may be more malleable and context-sensitive than initially believed.
Zero-shot visual reasoning with probabilistic analogical mapping
There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
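The graph-matching step described above can be caricatured as gradient ascent over a soft correspondence matrix. The sketch below is a toy stand-in, not the published PAM model: the graphs, attribute dimensionality, objective weighting, and learning rate are all invented, and the real model extracts its attributed graphs from pixels with deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "analogs": 4 source objects with attribute vectors; the target is a
# shuffled copy with a little noise, sharing the same relational structure.
perm = np.array([2, 0, 3, 1])                       # ground-truth shuffle
src_attr = rng.normal(size=(4, 16))
tgt_attr = src_attr[perm] + 0.05 * rng.normal(size=(4, 16))

src_adj = np.triu((rng.random((4, 4)) < 0.5).astype(float), 1)  # relations
tgt_adj = src_adj[perm][:, perm]                    # same relations, renamed

node_sim = src_attr @ tgt_attr.T                    # attribute similarity

def softmax_rows(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Soft correspondence matrix P, optimized by gradient ascent on
#   J(P) = tr(P.T @ node_sim) + lam * tr(src_adj.T @ P @ tgt_adj @ P.T),
# i.e. object similarity plus relational consistency.
logits = np.zeros((4, 4))
lam, lr = 1.0, 0.2
for _ in range(200):
    P = softmax_rows(logits)
    grad = node_sim + lam * (src_adj @ P @ tgt_adj.T + src_adj.T @ P @ tgt_adj)
    # chain rule through the row-wise softmax
    logits += lr * P * (grad - (grad * P).sum(axis=1, keepdims=True))

mapping = softmax_rows(logits).argmax(axis=1)       # hardened correspondences
print(mapping.tolist())
```

Weighting `lam` up or down shifts the solution between relation-driven and object-driven mappings, which mirrors the flexibility point (1) in the abstract.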
Probabilistic Analogical Mapping with Semantic Relation Networks
Hongjing Lu will present a new computational model of Probabilistic Analogical Mapping (PAM, in collaboration with Nick Ichien and Keith Holyoak) that finds systematic correspondences between inputs generated by machine learning. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts (word embeddings created by Word2vec) and of relations between concepts (created by our BART model). We have used PAM to simulate a broad range of phenomena involving analogical mapping by both adults and children. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations. More details can be found at https://arxiv.org/ftp/arxiv/papers/2103/2103.16704.pdf
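As a minimal sketch of mapping via semantic relation representations: below, hand-made 4-dimensional vectors stand in for Word2vec embeddings, and a simple difference vector stands in for BART's learned relation representations. All names and numbers are invented for illustration.

```python
import numpy as np

# Toy "embeddings" (dimensions loosely: royalty, gender, adulthood, spare)
emb = {
    "king":  np.array([1.0,  1.0,  1.0, 0.0]),
    "queen": np.array([1.0, -1.0,  1.0, 0.0]),
    "man":   np.array([0.0,  1.0,  1.0, 0.0]),
    "woman": np.array([0.0, -1.0,  1.0, 0.0]),
    "boy":   np.array([0.0,  1.0, -1.0, 0.0]),
}

def relation(a, b):
    """Crude relation vector for the pair (a, b): the embedding difference."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    """A:B::C:D, picking the D whose relation to C best matches A's to B."""
    target = relation(a, b)
    return max(candidates, key=lambda d: cosine(relation(c, d), target))

print(solve_analogy("king", "queen", "man", ["woman", "boy", "queen"]))
```

The point of the sketch is the comparison mechanism: mapping falls out of similarity between relation representations rather than between the individual concepts themselves.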
A role for cognitive maps in metaphors and analogy?
In human and non-human animals, conceptual knowledge is partially organized according to low-dimensional geometries that rely on brain structures and computations involved in spatial representations. Recently, two separate lines of research have investigated cognitive maps, which are associated with the hippocampal formation and resemble world-centered representations of the environment, and image spaces, which are associated with the parietal cortex and resemble self-centered spatial relationships. I will suggest that cognitive maps and image spaces may be two manifestations of a more general propensity of the mind to create low-dimensional internal models, and may play a role in analogical reasoning and metaphorical thinking. Finally, I will show some data suggesting that the metaphorical relationship between colors and emotions can be accounted for by the structural alignment of low-dimensional conceptual spaces.
Representations of abstract relations in infancy
Abstract relations are considered the pinnacle of human cognition, allowing analogical and logical reasoning, and possibly setting humans apart from other animal species. Such relations cannot be represented in a perceptual code but can easily be represented in a propositional language of thought, where relations between objects are represented by abstract discrete symbols. Focusing on the abstract relations same and different, I will show that (1) there is a discontinuity along ontogeny with respect to the representations of abstract relations, but (2) young infants already possess representations of same and different. Finally, (3) I will investigate the format of representation of abstract relations in young infants, arguing that those representations are not discrete, but rather built by juxtaposing abstract representations of entities.
Models of Core Knowledge (Physics, Really)
Even young children seem to have an early understanding of the world around them, and the people in it. Before children can reliably say "ball", "wall", or "Saul", they expect balls not to go through walls, and Saul to go right for a ball (if there's no wall). What is the formal conceptual structure underlying this commonsense reasoning about objects and agents? I will raise several possibilities for models underlying core intuitive physics as a way of talking about models of core knowledge and intuitive theories more generally. In particular, I will present some recent ML work trying to capture early expectations about object solidity, cohesion, and permanence, that relies on a rough-derendering approach.
Transforming task representations
Humans can adapt to a novel task on their first try. By contrast, artificial intelligence systems often require immense amounts of data to adapt. In this talk, I will discuss my recent work (https://www.pnas.org/content/117/52/32970) on creating deep learning systems that can adapt on their first try by exploiting relationships between tasks. Specifically, the approach is based on transforming a representation for a known task to produce a representation for the novel task, by inferring and then using a higher-order function that captures a relationship between the tasks. This approach can be interpreted as a type of analogical reasoning. I will show that task transformation can allow systems to adapt to novel tasks on their first try in domains ranging from card games, to mathematical objects, to image classification and reinforcement learning. I will discuss the analogical interpretation of this approach, an analogy between levels of abstraction within the model architecture that I refer to as homoiconicity, and what this work might suggest about using deep-learning models to infer analogies more generally.
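The idea of inferring a higher-order function relating two tasks can be caricatured in a linear setting (the actual work uses deep networks; everything below is an illustrative stand-in, not the paper's method): fit a map T relating paired representations of two known tasks, then apply T to a third task's representation, analogous to solving A : A' :: B : ?.

```python
import numpy as np

def infer_relation(src_reps, tgt_reps):
    """Fit a linear map T with tgt_reps ~= T @ src_reps: a toy stand-in
    for inferring the higher-order function relating two tasks.
    Columns of src_reps/tgt_reps are paired representation examples."""
    X, *_ = np.linalg.lstsq(src_reps.T, tgt_reps.T, rcond=None)
    return X.T

def transform_task(T, rep):
    """Zero-shot adaptation: apply the inferred relation to a new task."""
    return T @ rep
```

For instance, if T captures how one card game's representation maps to a role-swapped variant, applying the same T to a third game's representation yields its role-swapped variant with no further training.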
Analogical reasoning and metaphor processing in autism - Similarities & differences
In this talk, I will present the results of two recent systematic reviews and meta-analyses related to analogical reasoning and metaphor processing in autism, together with the results of a study that investigated verbal analogical reasoning and metaphor processing in the same sample of participants. Both metaphors and analogies rely on exploiting similarities, and they necessitate contextual processing. Nevertheless, our findings relating to metaphor processing and analogical reasoning showed distinct patterns. Whereas analogical reasoning emerged as a relative strength in autism, metaphor processing was found to be a relative weakness. Additionally, both meta-analytic studies investigated the relations between the level of intelligence of participants included in the studies, and the effect size of group differences between the autistic and typically developing (TD) samples. These analyses suggested, in the case of analogical reasoning, that the relative advantage of ASD participants might only be present in individuals with lower levels of intelligence. By contrast, impairments in metaphor processing appeared to be more pronounced in individuals with relatively lower levels of (verbal) intelligence. In our experimental study, we administered both verbal analogies and metaphors to the same sample of high-functioning autistic participants and TD controls. The two groups were matched on age, verbal IQ, working memory and educational background. Our aim was to better understand the similarities and differences between processing analogies and metaphors, and to see whether the advantage in analogical reasoning and the disadvantage in metaphor processing are universal in autism.
Structure-mapping in Human Learning
Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.
Thinking the Right Thoughts
In many learning and decision scenarios, especially sequential settings like mazes or games, it is easy to state an objective function but difficult to compute it, for instance because this can require enumerating many possible future trajectories. This, in turn, motivates a variety of more tractable approximations, which then raise resource-rationality questions about whether and when an efficient agent should invest time or resources in computing decision variables more accurately. Previous work has used a simple all-or-nothing version of this reasoning as a framework to explain many phenomena of automaticity, habits, and compulsion in humans and animals. Here, I present a more fine-grained theoretical analysis of deliberation, which attempts to address not just whether to deliberate vs. act, but which of many possible actions and trajectories to consider. Empirically, I first motivate and compare this account to nonlocal representations of spatial trajectories in the rodent place cell system, which are thought to be involved in planning. I also consider its implications, in humans, for variation over time and situations in subjective feelings of mental effort, boredom, and cognitive fatigue. Finally, I present results from a new study using magnetoencephalography in humans to measure subjective consideration of possible trajectories during a sequential learning task, and study its relationship to rational prioritization and to choice behavior.
One Instructional Sequence Fits All? A Conceptual Analysis of the Applicability of Concreteness Fading
According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.
Cross Domain Generalisation in Humans and Machines
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
Distinct patterns of default mode network activity differentially represent divergent thinking and mathematical reasoning.
Bernstein Conference 2024
Monkeys exhibit combinatorial reasoning during economic deliberation.
COSYNE 2022
Emergent compositional reasoning from recurrent neural dynamics
COSYNE 2023
Neural and behavioral evidence for hierarchical and counterfactual reasoning in non-human primates
COSYNE 2023
Equality reasoning in neural networks is modulated by learning richness
COSYNE 2025
A connectome-based fMRI study of spatial reasoning in stroke
FENS Forum 2024
The effect of pubertal development on spatial reasoning
FENS Forum 2024