Augmented reality merges virtual objects with the real world. Traditionally, these visualizations have been created in labs by experts.

We believe it is time for users to generate their own augmented realities on the go, because that's where exciting things happen.

Our methods enable users to create 3D geometry and image-based representations on their mobile devices. 


Image-based Augmentation

Through many years of research, Reactive Reality successfully merged image-based rendering methods with general augmented reality visualizations. We call this unique technology Image-based Augmentation. It is at the core of our products and enables users to create and share augmented reality (AR) content without special skills or hardware. As a result, AR is no longer bound to B2B or lab scenarios and can live up to its true potential. Smartphones and display glasses will become windows into an alternate reality where everything is possible. Move furniture effortlessly, or walk through your house before it is even built. Try on clothes or inspect a car before buying online. Integrate remotely located people into your daily life as if they were present.


Body Pose And Shape Detection On Mobile Devices

In our mission to bring augmented reality to a wide range of users, we developed a set of shape-fitting algorithms specialized for mobile platforms. Our methods cut down on memory operations while exploiting parallel instruction sets, achieving interactive computation times for detecting articulated, deformable models. These methods can recognize a user's body pose, body shape, and even clothing.
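
To give a flavor of such a fit, the Python sketch below runs a plain Gauss-Newton loop that adapts the joint angles of a two-joint planar arm to observed joint positions. The arm stands in for a full articulated body model, and all names and lengths are illustrative rather than our production API; the relevant point is that residuals and Jacobians are built in single vectorized passes, the memory-friendly layout that maps well onto parallel instruction sets.

    # Illustrative sketch: fitting an articulated model with vectorized
    # residuals. A two-joint planar arm stands in for a full body model;
    # the same Gauss-Newton loop scales to richer parametric models.
    import numpy as np

    LINK_LENGTHS = np.array([0.5, 0.4])  # assumed limb lengths (meters)

    def forward_kinematics(angles):
        """Return the 2D positions of both joints for given joint angles."""
        cumulative = np.cumsum(angles)  # absolute link angles
        steps = LINK_LENGTHS[:, None] * np.stack(
            [np.cos(cumulative), np.sin(cumulative)], axis=1)
        return np.cumsum(steps, axis=0)  # one (x, y) row per joint

    def fit_pose(observed, angles, iterations=20, eps=1e-5):
        """Gauss-Newton fit of joint angles to observed joint positions."""
        for _ in range(iterations):
            residual = (forward_kinematics(angles) - observed).ravel()
            jac = np.empty((residual.size, angles.size))
            for j in range(angles.size):  # numerical Jacobian, column-wise
                perturbed = angles.copy()
                perturbed[j] += eps
                jac[:, j] = ((forward_kinematics(perturbed) - observed).ravel()
                             - residual) / eps
            step, *_ = np.linalg.lstsq(jac, -residual, rcond=None)
            angles = angles + step
        return angles

    target = forward_kinematics(np.array([0.7, -0.3]))  # synthetic detection
    print(fit_pose(target, np.zeros(2)))  # recovers the target pose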


Stream Processing And Image Registration Methods

High-fidelity augmented reality systems have to process vast data streams with very little latency. In our research on spatio-temporal upsampling of multi-sensor setups and multi-frame-rate rendering, we developed stream processing methods that are particularly well suited for parallel execution. Our implementations of image warping, view-dependent texture mapping, and non-rigid image registration run interactively even on mobile devices.
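
As a concrete example, the sketch below implements backward image warping, one of the building blocks behind view-dependent texturing and multi-frame-rate rendering. The homography and the nearest-neighbor sampling are deliberate simplifications; what matters is that every output pixel is computed in one data-parallel pass, exactly the structure that GPU and SIMD implementations exploit.

    # Illustrative sketch of backward image warping under a homography.
    import numpy as np

    def warp_image(image, H_inv, out_shape):
        """Warp `image`, sampling backward from each output pixel."""
        h, w = out_shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Homogeneous output coordinates, mapped back into the source.
        coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
        src = H_inv @ coords.astype(np.float64)
        sx = np.rint(src[0] / src[2]).astype(int)
        sy = np.rint(src[1] / src[2]).astype(int)
        # Keep only lookups that land inside the source image.
        valid = (0 <= sx) & (sx < image.shape[1]) & \
                (0 <= sy) & (sy < image.shape[0])
        out = np.zeros((h, w) + image.shape[2:], dtype=image.dtype)
        out.reshape(h * w, -1)[valid] = \
            image[sy[valid], sx[valid]].reshape(valid.sum(), -1)
        return out

    # Usage: shift an image 10 pixels to the right.
    img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
    H = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    shifted = warp_image(img, np.linalg.inv(H), (64, 64))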


Virtual Avatar Reconstruction For Mixed Reality

Virtual avatars are common to a wide range of applications, such as telepresence systems, virtual try-on, and games. To create a sense of immersion, avatars need to look like the users they represent. Our technology enables users to create realistic avatars of themselves on their home PC, gaming console, or smartphone. It is based on a highly efficient non-linear solver that automatically adapts a parametric avatar model to given color and depth data.
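
The sketch below illustrates the principle with off-the-shelf tools: an axis-aligned ellipsoid stands in for the parametric avatar model, synthetic noisy surface samples stand in for a depth scan, and SciPy's Levenberg-Marquardt solver stands in for our specialized one. All names and numbers are for illustration only.

    # Illustrative sketch: non-linear least-squares fit of a parametric
    # shape (an ellipsoid) to point samples, as from a depth sensor.
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, points):
        """Residuals of points to an axis-aligned ellipsoid surface.

        params = (cx, cy, cz, rx, ry, rz): center and per-axis radii.
        """
        center, radii = params[:3], params[3:]
        return np.linalg.norm((points - center) / radii, axis=1) - 1.0

    # Synthetic "depth scan": noisy samples from a known ellipsoid.
    rng = np.random.default_rng(0)
    true = np.array([0.1, -0.2, 1.5, 0.3, 0.5, 0.25])
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    scan = true[:3] + dirs * true[3:] + rng.normal(scale=0.005, size=(500, 3))

    fit = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0, 0.4, 0.4, 0.4]),
                        args=(scan,), method="lm")
    print(fit.x)  # close to the `true` parameters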


Sensor Fusion and Multi-View Geometry

Over the years, we have built several multi-camera systems, using projection-based and time-of-flight depth sensors in combination with color cameras to fuse 4D data sets of dynamic scenes. Our research focused in particular on high-performance processing of image sequences to drive a real-time 3D display. These methods make it possible to create full 3D videos of people for telepresence and entertainment. Imagine a live magic mirror that shows you from all sides, immersed in a virtual world!
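
As a minimal illustration of the fusion building block, the sketch below back-projects a single depth frame into a colored point cloud using pinhole camera intrinsics. The camera parameters are made up for the example; a full pipeline repeats this step per sensor and merges the resulting clouds in a shared world frame for every frame of the sequence.

    # Illustrative sketch: lifting a depth image (meters) plus a color
    # image to a colored 3D point cloud via pinhole back-projection.
    import numpy as np

    def depth_to_points(depth, color, fx, fy, cx, cy):
        """Return (points, colors) for all pixels with a valid depth."""
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        x = (us - cx) * depth / fx  # pinhole back-projection
        y = (vs - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        colors = color.reshape(-1, color.shape[-1])
        valid = points[:, 2] > 0  # drop pixels with no depth reading
        return points[valid], colors[valid]

    # Usage with a synthetic frame; real data would come from the sensors.
    depth = np.full((240, 320), 2.0)
    depth[:10] = 0.0  # simulate missing measurements
    color = np.zeros((240, 320, 3), dtype=np.uint8)
    pts, cols = depth_to_points(depth, color,
                                fx=300.0, fy=300.0, cx=160.0, cy=120.0)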