Neuro-inspired Human-Robot Interaction
General Information
Humans use many different tools, or modalities, to communicate in their everyday lives, including spoken and written language, sign language, body gestures, facial expressions and computational interfaces. For humans, the means to interact with each other is of vital importance in a social environment, but communication does not occur exclusively among humans: animals and robots are also able to communicate using various modalities. The research in our group aims to contribute to fundamental research by offering functional models for testing neuro-cognitive hypotheses about aspects of human communication, and by providing efficient bio-inspired methods to produce robust controllers for a communicative robot that successfully engages in human-robot interaction.
|
Leading Investigator: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: S. Heinrich, D. Jirak, Dr. S. Magg
|
|
Natural Language Learning on a Neural Cognitive Robot - Stefan Heinrich
In the last decade, the debate has intensified over whether humans have an amodal system for language and higher cognition, or whether everything we think or talk about is grounded in the modalities of our sensory, sensorimotor or motor systems. The view that language is likely distributed over the brain is supported by recent neuroscientific evidence based on fMRI and EEG measurements, as well as by valuable theoretical hypotheses based on psycholinguistic long-term studies. Previous neuro-cognitive models have contributed to testing hypotheses about how functional webs for words or morphemes could arise, and have raised a common question: what are the functional dynamics of these distributed neural representations?
This research project aims to approach this general question from a computational perspective and to investigate the functional dynamics of a plausible neural model that also takes into account the temporal nature of morpheme or word sequences, namely sentences. The research investigates different recent continuous time recurrent neural networks (CTRNNs) for processing sentences with respect to cross-modal communication, integrating visual context information. At a later stage of this project, the recurrent architecture will be integrated on a robotic platform to evaluate its capabilities in learning semantic mappings between language and context, and in generalising to unheard sentences in a teacher-learner scenario.
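To illustrate the kind of model described above, the following is a minimal sketch of an Euler-discretised CTRNN processing a word sequence. The vocabulary, one-hot encoding, network size and random weights are all hypothetical stand-ins; a real model would learn the weights and couple in visual context.

```python
import math
import random

random.seed(0)

def ctrnn_step(u, x, W, W_in, tau=2.0, dt=1.0):
    """One Euler step of the CTRNN dynamics:
    tau * du/dt = -u + W @ tanh(u) + W_in @ x
    u: internal neuron states, x: current input vector.
    """
    y = [math.tanh(ui) for ui in u]  # firing rates
    u_new = []
    for i in range(len(u)):
        rec = sum(W[i][j] * y[j] for j in range(len(u)))
        inp = sum(W_in[i][k] * x[k] for k in range(len(x)))
        du = (-u[i] + rec + inp) / tau
        u_new.append(u[i] + dt * du)
    return u_new

# Hypothetical toy vocabulary with one-hot word vectors.
vocab = ["the", "robot", "sees", "a", "ball"]
def one_hot(word):
    return [1.0 if w == word else 0.0 for w in vocab]

n_hidden, n_in = 8, len(vocab)
W = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_hidden)]
W_in = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]

# Feed a sentence word by word; the state u accumulates temporal context.
u = [0.0] * n_hidden
for word in ["the", "robot", "sees", "a", "ball"]:
    u = ctrnn_step(u, one_hot(word), W, W_in)

print(len(u))  # final hidden state encodes the whole sentence
```

The key point of the sketch is that the internal state u is not reset between words, so the representation after the last word depends on the entire sequence, which is what makes such networks suitable for sentence-level semantics.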
Vision-Based Gesture Recognition - Doreen Jirak
Gestures accompany our communication every day. Usually, gestures are performed unintentionally or, at least, unconsciously; think, for example, of describing the way to the station, or of gesturing while talking on the phone. Another motivation for gesture recognition concerns deaf and hearing-impaired people, who need a visual channel for both sending and receiving information. Among the different classification schemes for gestures, my work is currently restricted to command gestures such as stop or pointing. For evaluation, experiments are performed with the humanoid robot platform NAO.
This project aims to separate gesticulation from gestures carrying meaning. It is therefore essential to define what a gesture is, how it is processed in the human brain, and what the differentiating criteria are when considering a gesture set.
Treating gestures as a visual channel for communication, vision-based approaches form the basis of my work, i.e. processing a temporal sequence of gestures captured by a camera. Clarifying the biological underpinnings of gesture processing leads to the derivation of a mathematical framework capable of reflecting the underlying neural processes. As humans are able to perceive and recognize even noisy or incomplete visual information, a plausible approach is statistical modelling with Conditional Random Fields. This type of undirected graphical model captures soft constraints on the spatio-temporal variation patterns that constitute a gesture sequence. Further exploration of statistical methods such as Deep Belief Networks, and of neurally motivated formalisms such as continuous time recurrent neural networks, will finally form the basis for the implementation of a robust architecture for human-robot interaction and corresponding scenarios.
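As a small illustration of the Conditional Random Field idea, the sketch below decodes the most likely gesture-label sequence for a sequence of frame observations with the Viterbi (max-sum) algorithm over a linear chain. The gesture labels, observation features and log-potential scores are all made-up illustrations; in a trained CRF these potentials come from learned feature weights.

```python
labels = ["stop", "point", "rest"]          # hypothetical gesture classes
obs = ["hand_up", "hand_fwd", "hand_down"]  # hypothetical per-frame features

# Hand-set log-potentials; a self-transition bonus encodes temporal smoothness.
emit = {
    ("stop", "hand_up"): 1.5,  ("stop", "hand_fwd"): 0.1,  ("stop", "hand_down"): 0.0,
    ("point", "hand_up"): 0.2, ("point", "hand_fwd"): 1.4, ("point", "hand_down"): 0.1,
    ("rest", "hand_up"): 0.0,  ("rest", "hand_fwd"): 0.1,  ("rest", "hand_down"): 1.3,
}
trans = {(a, b): (0.8 if a == b else 0.1) for a in labels for b in labels}

def viterbi(obs):
    """Most likely label sequence under a linear-chain CRF (max-sum)."""
    V = [{y: emit[(y, obs[0])] for y in labels}]   # best score ending in y at t
    back = []                                       # backpointers per step
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for y in labels:
            best_prev = max(labels, key=lambda yp: V[t - 1][yp] + trans[(yp, y)])
            back[-1][y] = best_prev
            V[t][y] = V[t - 1][best_prev] + trans[(best_prev, y)] + emit[(y, obs[t])]
    # Trace the best path backwards from the highest-scoring final label.
    path = [max(labels, key=lambda y: V[-1][y])]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))

print(viterbi(obs))  # → ['stop', 'point', 'rest']
```

The "soft constraints" mentioned above appear here as the transition scores: an unlikely label change is penalised but never forbidden, which is what lets the model tolerate noisy or incomplete frames.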
Evolving Neural Agents - Sven Magg
Neural networks are often used to control autonomous agents, or form part of a controller in a hybrid approach. With the growing complexity of such agents and their tasks, the corresponding controller inevitably has to become more complex as well. Instead of only increasing network sizes to allow for higher complexity, different mechanisms that lead to more complex time-dependent behaviour can be incorporated.
Neuromodulation is one such mechanism found in mammalian brains; it is involved in higher-level cognitive functions like decision-making, attention, and emotion. Understanding its influence on biological networks might lead to models that can be used to improve the function of artificial neural network controllers. In complex networks with a large parameter space, defining the necessary topology and finding a working parameter set becomes increasingly difficult. One way to solve this problem is artificial evolution, which uses biologically inspired mechanisms like selection, reproduction, and mutation. Each individual in this process is evaluated using a fitness criterion that defines how well a network solves the given problem. By repeatedly selecting promising candidates from a population and changing them through recombination and mutation, the evolutionary process constitutes a parallel search through the space of possible solutions.
In this project, we want to investigate computational models of neuromodulation in neural networks in order to identify mechanisms that can be used efficiently in robot controllers. In addition to this bottom-up approach, we also want to find working controllers through an evolutionary process that utilises neuromodulatory effects. By analysing successful controllers (their network dynamics as well as their development through evolution), we hope to gain essential knowledge to improve the models found in the first step. Since the speed at which successful controllers evolve depends on the combination of evolutionary operators, network features, and task class, analysing the process will also enable us to identify efficient combinations of operators and network types for given tasks. The aim is to arrive at "optimal" combinations, so that efficient robot controllers can be created quickly for different problems.
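The selection-recombination-mutation loop described above can be sketched as a minimal genetic algorithm. Here the genome is a hypothetical vector of controller weights, and the fitness function is a toy stand-in (distance to a fixed target pattern) for what would really be a robot-controller evaluation in simulation; population size, mutation strength and truncation selection are illustrative choices.

```python
import random

random.seed(1)

GENOME_LEN = 6      # hypothetical: six controller weights
POP, GENS = 20, 40  # illustrative population size and generation count

def fitness(genome):
    """Toy stand-in for evaluating a controller: the closer the
    weights are to a target pattern, the fitter (0 is optimal)."""
    target = [0.5, -0.2, 0.8, 0.0, -0.6, 0.3]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, sigma=0.1):
    """Gaussian perturbation of every gene."""
    return [g + random.gauss(0, sigma) for g in genome]

def crossover(a, b):
    """One-point recombination of two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Random initial population.
pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]

for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection: keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children  # surviving parents give implicit elitism

best = max(pop, key=fitness)
print(round(fitness(best), 3))  # approaches 0 as evolution converges
```

Because the better half of each generation survives unchanged, the best fitness never decreases; in the project described above, the interesting analysis lies in how different operators and network features change how quickly this curve rises.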
Related Publications
Further Information
The research in the Knowledge Technology Group is closely related to the teaching we offer. We regularly offer lectures, seminars, practical courses, and projects concerning:



