Augmented reality is the science of merging virtual objects with the real world. Usually, these visualizations are created in a lab by experts.

We believe it is time for users to generate their own augmented realities on the go, because that's where exciting things happen.

Our methods enable users to create 3D geometry and image-based representations on their mobile devices. 

Body Pose And Shape Detection On Mobile Devices

In our mission to roll out augmented reality to a wide range of users, we developed a specialized set of shape-fitting algorithms for mobile platforms. Our methods cut down on memory operations while exploiting parallel instruction sets, achieving interactive computation times for articulated, deformable model detection. These methods can recognize a user's body position, body shape, and even clothing.
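The actual mobile fitting algorithms are not spelled out here; as a toy illustration of the vectorized, SIMD-friendly style this kind of work relies on, here is a closed-form least-squares fit of a scale and translation aligning a template point set to observed points. The function name and setup are ours, not from the original work.

```python
import numpy as np

def fit_scale_translation(template, observed):
    """Toy model fit: find scale s and translation t so that
    s * template + t best matches observed (least squares, closed form).
    Every step is a vectorized NumPy operation, the pattern that maps
    well to parallel instruction sets on mobile CPUs."""
    # Center both point sets around their means
    mt, mo = template.mean(axis=0), observed.mean(axis=0)
    tc, oc = template - mt, observed - mo
    # Closed-form least-squares scale, then the matching translation
    s = (tc * oc).sum() / (tc * tc).sum()
    t = mo - s * mt
    return s, t
```

A real articulated, deformable model adds joint angles and shape parameters, but the core idea is the same: express the fit as array-wide operations instead of per-point loops.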

360° Product Visualization

Our 360° product viewer helps customers of web shops and online marketplaces get a better impression of what they will receive. Products can be viewed from all sides, and different sizes or configurations can be compared to each other. Our image processing technology helps the user capture these images and turn them into appealing visualizations. For example, use your phone to capture 360° views of your car before putting it on Craigslist.
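The core of such a viewer is mapping a viewing angle (for example, from a drag gesture) to one of the captured frames. A minimal sketch, assuming frames are evenly spaced around a full turn (the function name is hypothetical):

```python
def frame_for_angle(angle_deg, num_frames):
    """Map a viewing angle in degrees to the index of the nearest
    captured frame, assuming num_frames evenly spaced views over 360°."""
    step = 360.0 / num_frames
    return int(round((angle_deg % 360.0) / step)) % num_frames
```

With 36 captured frames, each frame covers 10° of rotation, and dragging past 360° wraps back to frame 0.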

Stream Processing And Image Registration Methods

High-fidelity augmented reality systems have to process vast data streams with very little latency. In our research on spatio-temporal upsampling of multi-sensor setups and multi-frame-rate rendering, we developed stream processing methods that are particularly well suited for parallel execution. Our implementations of image warping, view-dependent texture mapping, and non-rigid image registration run interactively even on mobile devices.
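Image warping by inverse mapping is a good example of why these operations parallelize so well: every output pixel is computed independently from a source lookup. A minimal sketch for a pure translation with nearest-neighbor sampling (the function is our illustration, not the published implementation):

```python
import numpy as np

def warp_translate(img, dx, dy):
    """Inverse-mapping image warp for an integer translation (dx, dy):
    each output pixel looks up its source pixel, fully vectorized.
    img is a (H, W) array; out-of-bounds pixels are filled with zero."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each output pixel, find where it came from in the source image
    src_y, src_x = ys - dy, xs - dx
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = np.zeros_like(img)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out
```

The same inverse-mapping pattern generalizes to homographies and non-rigid deformation fields, with bilinear instead of nearest-neighbor sampling for quality.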

Virtual Avatar Reconstruction For Mixed Reality

Virtual avatars are common in a wide range of applications, such as telepresence systems, virtual try-on applications, and games. Avatars need to look like the users they represent to create a sense of immersion. Our technology enables users to create realistic avatars of themselves on their home PC, gaming console, or smartphone. It is based on a highly efficient non-linear solver that automatically adapts a parametric avatar model to given color and depth data.
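The avatar solver itself is far richer than anything we can show here, but the underlying principle, iteratively adapting a parametric model to sensor measurements, can be sketched with a Gauss-Newton fit of a circle to 2D points. All names and the reduced three-parameter model are our own toy example:

```python
import numpy as np

def fit_circle(points, iters=20):
    """Gauss-Newton fit of circle parameters (cx, cy, r) to 2D points,
    a toy stand-in for adapting a parametric shape model to depth data.
    Residual per point: distance to center minus radius."""
    cx, cy = points.mean(axis=0)                      # initial guess
    r = np.linalg.norm(points - [cx, cy], axis=1).mean()
    for _ in range(iters):
        dx, dy = points[:, 0] - cx, points[:, 1] - cy
        d = np.hypot(dx, dy)
        res = d - r                                   # residual vector
        # Jacobian of the residuals w.r.t. (cx, cy, r)
        J = np.column_stack([-dx / d, -dy / d, -np.ones_like(d)])
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        cx, cy, r = cx + step[0], cy + step[1], r + step[2]
    return cx, cy, r
```

A full avatar model replaces the three circle parameters with pose and shape coefficients and adds color terms, but each iteration still linearizes the residuals and solves a least-squares system.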

Sensor Fusion and Multi-View Geometry

Over the years, we built several multi-camera systems, using projection-based and time-of-flight depth sensors in combination with color cameras to fuse 4D data sets of dynamic scenes. Our research focused in particular on high-performance processing of image sequences to drive a real-time 3D display. Our methods enable full 3D videos of people for telepresence and entertainment. Imagine a live magic mirror that can show you from all sides - immersed in a virtual world!
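A basic building block of such depth-plus-color fusion is projecting each 3D point from the depth sensor into the color camera to sample its color. A minimal sketch assuming a pinhole model with known intrinsics (fx, fy, cx, cy) and points already transformed into the color camera's coordinate frame; the function name is ours:

```python
import numpy as np

def colorize_depth(points_3d, color_img, fx, fy, cx, cy):
    """Fuse depth and color: project (N, 3) points through a pinhole
    camera model and sample the (H, W, 3) color image. Returns the
    sampled colors and a validity mask for points that project inside
    the image with positive depth."""
    z = points_3d[:, 2]
    u = np.round(points_3d[:, 0] * fx / z + cx).astype(int)
    v = np.round(points_3d[:, 1] * fy / z + cy).astype(int)
    h, w = color_img.shape[:2]
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_3d), 3), dtype=color_img.dtype)
    colors[valid] = color_img[v[valid], u[valid]]
    return colors, valid
```

Repeating this for every depth sensor and frame, after calibration aligns all cameras in one coordinate system, yields the colored 4D point streams that feed a real-time 3D display.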