Our research vision is to create Assistive Augmentations: Designing intelligent human-computer interfaces that extend the limits of our perceptual and cognitive capabilities.
Our senses define how we perceive the world and interact with the environment around us. While permanent or situational impairments often limit our ability to accomplish even simple tasks, some people have developed extraordinary ways to overcome their limitations by using their senses in unconventional ways. Prominent examples include Evelyn Glennie, a world-renowned solo percussionist who is profoundly deaf yet able to perceive music through her skin, and Ben Underwood, who could hear his surroundings and move independently using self-taught echolocation after his eyes were removed when he was a toddler.
Assistive Augmentations aim to enhance our perceptual and cognitive capabilities and, with the help of technology, enable anyone to overcome impairments. We believe carefully designed Assistive Augmentations can empower people constrained by impairments to live more independently and can even extend one's perceptual and cognitive capabilities beyond the ordinary.
Working towards this vision, we have developed technologies such as Ai-See, which allows blind users to access information simply by pointing at objects and asking questions; Prospero, a memory-training assistant that detects when a user can learn most efficiently; OM, which allows deaf users to 'feel' music; GymSoles, a smart insole that helps users perform exercises with correct body posture; and Kiwrious, which lets users see the invisible physical phenomena around them.
We aim to design novel assistive augmentations that leverage state-of-the-art technology and to develop innovative concepts and tools that pave the way for the next generation of assistive augmentations.
Our current focus includes supporting people with visual or hearing impairments, children with Autism Spectrum Disorder, senior citizens, and school children.
Our Research Areas
Wearable interfaces, on-body feedback, and electronic textiles
Human-centered machine learning
Applications of self-supervised learning techniques to understand physiological, verbal, and visual information
New input and interaction concepts
Embedded electronics, rapid prototyping, and toolkits