Research

Overview

(Prepared November 2018) Overview video of selected prior research and student projects.

Interactive Medical Simulation

Surgical training systems The development of simulations for surgical training, in close collaboration with clinical partners, has been a focus of our group for almost two decades. A key target of the work is to achieve a high level of realism. We strive to go beyond rehearsal of basic manipulative skills and to enable the training of procedural skills such as decision making and problem solving. Furthermore, we address the integration of the simulation systems into the medical curriculum. The research activities of the group have also led to the foundation of the spin-off company VirtaMed.

Real-time mesh cutting A central training objective of virtual reality based surgical simulation is the removal of pathologic tissue. This necessitates stable, real-time updates of the underlying mesh representation. We have developed various approaches for cutting into tetrahedral or triangular meshes, tailored to specific surgical training systems.
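A first step common to many tetrahedral cutting schemes is classifying each element against the cut surface to determine how it must be split. The following NumPy sketch illustrates this step for a single tetrahedron and a planar cut; the function name and the plane-based formulation are illustrative assumptions, not the group's actual algorithm.

```python
import numpy as np

def classify_tet_against_plane(verts, plane_point, plane_normal, eps=1e-9):
    """Signed distances of a tetrahedron's 4 vertices to a cut plane.

    Returns the distances and the number of vertices on the positive side,
    which determines the topological split case: 0 or 4 -> no cut;
    1 or 3 -> one vertex separated (triangular cross-section);
    2 -> two vertices on each side (quadrilateral cross-section).
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (verts - plane_point) @ n
    d[np.abs(d) < eps] = 0.0  # snap near-coplanar vertices for stability
    return d, int(np.sum(d > 0))

# Unit tetrahedron cut by the plane x = 0.25:
verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dists, n_pos = classify_tet_against_plane(
    verts, np.array([0.25, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
# n_pos == 1: only the vertex (1, 0, 0) lies on the positive side,
# so one vertex is separated from the other three.
```

In a full simulation this classification drives the subdivision into new tetrahedra and the corresponding update of the deformation model, which is where the real-time constraint becomes challenging.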

Virtual and Augmented Reality

Colocated visuo-haptic augmentation A key target of recent work has been the extension of the paradigm of augmented reality towards multimodal interaction. Our group has focused on the integration of haptic feedback into visual augmented reality environments. We have provided several solutions that allow natural interaction with both real and virtual objects at the same time. In this context we have developed methods for the precise calibration and integration of haptic interfaces in multimodal AR environments, techniques to improve the accuracy and stability of the visual overlay, as well as enhanced visualization methods to improve depth perception.
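Calibrating a haptic interface into an AR environment typically involves estimating the rigid transform between the device's coordinate frame and the tracking/display frame from corresponding point measurements. As a minimal sketch of this kind of registration step (assuming the classical least-squares Kabsch method; the group's actual calibration procedure is not specified here):

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t (Kabsch).

    P, Q: 3 x N arrays of corresponding points, e.g. haptic stylus tip
    positions measured in the device frame (P) and the same positions
    observed in the AR tracking frame (Q).
    """
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is -1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With an accurate (R, t), forces and graphics rendered in the device frame can be colocated with the visual overlay, which is a prerequisite for stable visuo-haptic augmentation.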

Haptic augmentation Furthermore, recent research attempts to go beyond mere visual augmentation by providing actual haptic augmentation, i.e. the combination of forces from real and virtual objects. Visuo-haptic augmented reality opens up completely new possibilities for interaction and media presentation, e.g. in education, communication, and collaboration.

Computer Haptics

Data-driven haptic rendering Akin to image-based rendering in computer graphics, we have developed the concept of data-driven haptic rendering. The underlying idea is to acquire interaction data during the manipulation of objects, which are subsequently used for data-driven virtual rendering. We have developed a special recording device as well as methods for the integration of different sensor signals for the display. Radial basis functions are used for the interpolation of haptic signals acquired from fluids and solids. We have also proposed computationally efficient techniques to accelerate the interpolation process. This research domain is a key target of our current work. We strive to build a general framework for multimodal data-driven acquisition and rendering, which allows objects to be captured both visually and haptically during unconstrained interaction, for subsequent display.

Human Computer Interaction in Medicine

A further direction of the scientific work in our group is the development of new algorithms and systems for medical diagnosis and planning. The leitmotif of this activity is the optimal cooperation between interactive algorithms and the human operator.

Multi-modal data segmentation Extensive research has been invested in recent years into improving interactive segmentation algorithms. However, the human-computer interface, a substantial part of an interactive setup, is usually not investigated. The aim of this work is the optimal cooperation between interactive image analysis algorithms and human operators. A visuo-haptic interaction tool for medical segmentation has been designed, which opens the way to virtual endoscopy of the small intestine.

Enhanced surgical planning A further direction is the support of surgical planning procedures with enhanced interfaces. For instance, in joint projects with the University Hospital Zurich and the Balgrist University Hospital, several systems were developed for planning surgical interventions on complex fractures of the hip and the shoulder joint, as well as for forearm surgery.

Typing in VR

Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge, as neither the user's hands nor the physical input devices are directly visible. Hence, conventional desktop peripherals are very slow, imprecise, and cumbersome. We developed an apparatus that tracks the user's hands and a physical keyboard, and visualizes both in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands and reached almost outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable high typing performance.
