Projects
Cognitive Assistive Systems (CASY)
The programme “Cognitive Assistive Systems (CASY)” will contribute to the next generation of human-centred systems for human-computer and human-robot collaboration. A central need of these systems is a high level of robustness and increased adaptivity, so that they can act more naturally under uncertain conditions. To address this need, research will focus on cognitively motivated multi-modal integration and human-robot interaction.
Spokesperson CASY: Prof. Dr. S. Wermter
Leading Investigators: Prof. Dr. S. Wermter, Prof. Dr. J. Zhang, Prof. Dr. C. Habel, Prof. Dr.-Ing. W. Menzel
Details: CASY Project
Neuro-inspired Human-Robot Interaction
The Knowledge Technology Group aims to contribute to fundamental research by offering functional models for testing neuro-cognitive hypotheses about aspects of human communication, and by providing efficient bio-inspired methods that produce robust controllers for a communicative robot that successfully engages in human-robot interaction.
Leading Investigators: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: S. Heinrich, D. Jirak, S. Magg
Details: HRI Project
Robotics for Development of Cognition (RobotDoC)
The RobotDoC Collegium is a multi-national doctoral training network for interdisciplinary training in developmental cognitive robotics. The RobotDoC Fellows will develop advanced expertise in domain-specific robotics research skills and in complementary transferable skills for careers in academia and industry. They will acquire hands-on experience through experiments with the open-source humanoid robot iCub, complemented by other robots available in the network's laboratories.
Leading Investigators: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: N. Navarro, J. Zhong
Details: RobotDoC Project
Knowledgeable SErvice Robots for Aging (KSERA)
KSERA investigates the integration of assistive home technology and service robotics to support older users in a domestic environment. The KSERA system helps older people, especially those with COPD (a chronic lung disease), with daily activities and care needs, and provides the means for effective self-management. The main aim is to design a pleasant, easy-to-use and proactive socially assistive robot (SAR) that uses context information obtained from sensors in the older person's home to provide useful information and timely support at the right place.
Leading Investigators: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: N. Meins, W. Yan
Details: KSERA Project
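To make the idea of context-triggered support concrete, the following Python sketch maps simple sensor readings from the home to a proactive robot prompt. The sensor names, thresholds and messages are illustrative assumptions, not the KSERA system's actual rules or interfaces.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    room: str                      # room where the person was last detected (assumed sensor)
    co2_ppm: float                 # indoor air-quality reading (assumed sensor)
    minutes_since_medication: int  # time since the last confirmed intake (assumed sensor)

def choose_prompt(ctx: Context) -> Optional[str]:
    """Return a message the robot could deliver in ctx.room, or None."""
    if ctx.co2_ppm > 1200:
        return "The air quality is getting poor; it might help to open a window."
    if ctx.minutes_since_medication > 240:
        return "It is time to take your medication."
    return None

# Example: stale air in the living room triggers a proactive prompt.
print(choose_prompt(Context(room="living room", co2_ppm=1350,
                            minutes_since_medication=30)))
```

In a deployed system such rules would of course be far richer and learned or personalised; the sketch only illustrates how home-sensor context can drive timely, in-place support.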
Cross-modal Interaction in Natural and Artificial Cognitive Systems (CINACS)
CINACS is an international graduate college that investigates the principles of cross-modal interactions in natural cognitive systems in order to implement them in artificial systems. Research primarily considers the three sensory systems vision, hearing and haptics. The project is carried out jointly by the University of Hamburg and Tsinghua University, Beijing, and is funded by the DFG and the Chinese Ministry of Education.
Spokesperson CINACS: Prof. Dr. J. Zhang
Leading Investigators: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: J. Bauer, J. Kleesiek
Details: CINACS Project
What it Means to Communicate (NESTCOM)
What does it mean to communicate?
Many projects have explored verbal and visual communication in humans as well as motor actions, covering a wide range of topics including learning by imitation, the neural origins of language, and the connections between verbal and non-verbal communication. The EU project NESTCOM sets out to analyse these results and to contribute to the understanding of the characteristics of human communication, focusing specifically on their relationship to computational neural networks and the role of mirror neurons in multimodal communication.
Leading Investigator: Prof. Dr. S. Wermter
Associates: Dr. M. Knowles, M. Page
Details: NESTCOM Project
Midbrain Computational and Robotic Auditory Model for focused hearing (MiCRAM)
This research is a collaborative interdisciplinary EPSRC project carried out by the University of Newcastle, the University of Hamburg and the University of Sunderland. The overall aim is to study sound processing in the mammalian brain and to build a biomimetic robot to validate and test neuroscience models of focused hearing. We collaboratively develop a biologically plausible computational model of auditory processing at the level of the inferior colliculus (IC). This approach will potentially clarify the roles of the multiple spectral and temporal representations present at the level of the IC, and investigate how representations of sounds interact with auditory processing at that level to focus attention and select sound sources for robot models of focused hearing.
Leading Investigators: Prof. Dr. S. Wermter, Dr. H. Erwin
Associates: Dr. J. Liu, Dr. M. Elshaw
Details: MiCRAM Project
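One standard ingredient of binaural hearing models that feed the inferior colliculus is an estimate of the interaural time difference (ITD) between the two ears. As a minimal, hypothetical illustration (not the project's actual IC model), the following Python sketch estimates the ITD between two microphone channels by cross-correlation; the function name and parameters are assumptions for illustration.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=7e-4):
    """Estimate the interaural time difference between two microphone
    signals by cross-correlation, restricted to physiologically
    plausible lags (|ITD| <= ~0.7 ms).  A negative value means the
    left channel leads, i.e. the source is on the left."""
    max_lag = int(max_itd * fs)
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-len(right) + 1, len(left))
    mask = np.abs(lags) <= max_lag
    return lags[mask][np.argmax(corr[mask])] / fs

# Toy usage: a 500 Hz tone that reaches the left microphone ~0.3 ms earlier.
fs = 44100
t = np.arange(0, 0.05, 1 / fs)
src = np.sin(2 * np.pi * 500 * t)
left = src
right = np.roll(src, int(0.0003 * fs))   # delayed copy of the left channel
print(f"estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")  # ~ -0.29 ms
```

A robot model of focused hearing would combine such localisation cues with the spectro-temporal representations mentioned above to select one sound source among competing ones; the sketch covers only the localisation cue.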
Biomimetic Multimodal Learning in a Mirror Neuron-based Robot (MirrorBot)
This project develops and studies emerging embodied representations based on mirror neurons. New techniques, including cell assemblies, associative neural networks and Hebbian-type learning, associate visual, auditory and motor concepts. The basis of the research is an examination of the emergence of representations of actions, perceptions, conceptions, and language in a MirrorBot, a biologically inspired neural robot equipped with polymodal associative memory.
Details: MirrorBot Project
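As a loose illustration of Hebbian-type learning that associates concepts across modalities, the following Python sketch links visual and motor patterns through an outer-product associative matrix, so that a noisy visual cue recalls its motor counterpart. The pattern sizes, the bipolar encoding and the single-shot recall are assumptions for illustration, not the MirrorBot architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visual, n_motor, n_pairs = 64, 32, 5

# Bipolar (+1/-1) random patterns work well with a simple Hebbian rule.
visual = rng.choice([-1, 1], size=(n_pairs, n_visual))
motor = rng.choice([-1, 1], size=(n_pairs, n_motor))

# Hebbian learning: accumulate outer products over all visual-motor pairs.
W = np.zeros((n_motor, n_visual))
for v, m in zip(visual, motor):
    W += np.outer(m, v)

# Recall: present a corrupted visual cue and read out the motor pattern.
cue = visual[2].copy()
flip = rng.choice(n_visual, size=8, replace=False)
cue[flip] *= -1                       # corrupt ~12% of the cue
recalled = np.sign(W @ cue)

print("recall accuracy:", np.mean(recalled == motor[2]))
```

The same outer-product idea extends to auditory patterns or to bidirectional recall (motor to visual) by storing the transposed matrix as well.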