Reactive Reality moves augmented reality from the lab into your hands. All you need is a smartphone or tablet. Take pictures or download images from the web and you are ready. Experience products before you buy them. Visit places before you go. Create your own world.

Just like a social network, augmented reality needs exciting content to become something we use every day. Our technology enables smartphone users to share the things they love through augmented reality.


Image-Based Augmentation

Reactive Reality uses an advanced algorithmic solution that we call Image-Based Augmentation (IBA). It enables users to create and share augmented reality (AR) content on mobile platforms such as smartphones and wearable devices, and it requires no specialized skills or hardware. IBA is the result of more than a decade of advanced research: it merges image-based rendering methods with general augmented reality visualization. The result? Try on clothes before you buy them, experience a car online or walk through a house before it's built. Teleport people into your augmented reality and see them on your mobile device. There is no limit to what IBA can do when you carry your augmented reality in your pocket.
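
To make the idea concrete, here is a minimal sketch of the compositing step at the heart of image-based augmentation: a photograph is warped into the live camera view and blended over the frame. This is an illustration, not our production IBA pipeline; the function and parameter names are hypothetical, and the homography and mask are assumed to come from a separate tracking step.

    import cv2
    import numpy as np

    def augment_frame(camera_frame, product_image, homography, alpha_mask):
        """Blend an image-based rendering of an object into a live frame.

        camera_frame  -- HxWx3 uint8 background from the device camera
        product_image -- source photograph of the object to insert
        homography    -- 3x3 matrix mapping photo pixels into the frame
        alpha_mask    -- HxW float mask in [0, 1], object coverage in
                         frame coordinates (e.g., the warped photo's alpha)
        """
        h, w = camera_frame.shape[:2]
        # Warp the photograph into the camera's view: image-based
        # augmentation renders from images rather than hand-built 3D models.
        warped = cv2.warpPerspective(product_image, homography, (w, h))
        mask = alpha_mask[..., None]  # broadcast over the color channels
        # Standard alpha compositing of the warped object over the frame.
        out = mask * warped + (1.0 - mask) * camera_frame
        return out.astype(np.uint8)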


Body Pose And Shape Detection On Mobile Devices

In our mission to roll out augmented reality to a wide range of users, we developed a specialized set of shape-fitting algorithms for mobile platforms. Our methods cut down on memory operations while exploiting parallel instruction sets, achieving interactive computation times for articulated, deformable model detection. These methods can recognize a user's body pose, her body shape and even her clothes.
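
As an illustration of the vectorized, memory-friendly style such methods rely on, the sketch below scores a batch of 2D pose hypotheses for a body template using chamfer matching. Chamfer matching here is a simplified stand-in for our actual detectors, and all names are hypothetical; the point is that one batched gather replaces nested per-point loops, an access pattern that maps well onto mobile SIMD units.

    import numpy as np

    def score_pose_hypotheses(distance_transform, template_points, candidate_poses):
        """Chamfer-score 2D pose hypotheses of a body template against an image.

        distance_transform -- HxW array, distance to the nearest image edge
        template_points    -- Nx2 array of contour points of the body template
        candidate_poses    -- Px3 array of (tx, ty, scale) hypotheses
        """
        h, w = distance_transform.shape
        # Transform all N template points under all P poses at once (PxNx2):
        # a single batched operation instead of nested per-point loops.
        pts = (candidate_poses[:, None, 2:3] * template_points[None, :, :]
               + candidate_poses[:, None, :2])
        xs = np.clip(pts[..., 0], 0, w - 1).astype(np.intp)
        ys = np.clip(pts[..., 1], 0, h - 1).astype(np.intp)
        # One gather from the distance transform scores every hypothesis;
        # a lower mean edge distance means a better-fitting pose.
        costs = distance_transform[ys, xs].mean(axis=1)
        return costs  # candidate_poses[np.argmin(costs)] is the winner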


Stream Processing And Image Registration Methods

High-fidelity augmented reality systems have to process vast data streams with very little latency. In our research on spatio-temporal upsampling of multi-sensor setups and multi-frame-rate rendering, we developed stream processing methods that are particularly well suited for parallel execution. Our implementations of image warping, view-dependent texture mapping and non-rigid image registration run interactively even on mobile devices.
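
A minimal example of such a stream-processing kernel is backward image warping with bilinear sampling, sketched below in numpy rather than in our optimized mobile code. Every output pixel is computed independently of all others, which is exactly what makes the operation so amenable to parallel execution.

    import numpy as np

    def backward_warp(image, flow):
        """Warp a frame by a dense flow field using bilinear sampling.

        image -- HxWxC float array (e.g., the last fully rendered frame)
        flow  -- HxWx2 array of per-pixel (dx, dy) displacements
        """
        h, w = image.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        # Source coordinates: where each output pixel samples the input.
        sx = np.clip(xs + flow[..., 0], 0, w - 1.001)
        sy = np.clip(ys + flow[..., 1], 0, h - 1.001)
        x0, y0 = sx.astype(np.intp), sy.astype(np.intp)
        fx, fy = (sx - x0)[..., None], (sy - y0)[..., None]
        # Bilinear blend of the four neighboring input pixels. Each output
        # pixel is independent, so the computation parallelizes trivially.
        top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
        bot = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
        return (1 - fy) * top + fy * bot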


Virtual Avatar Reconstruction For Mixed Reality

Virtual avatars are common to a wide range of applications, such as telepresence systems, virtual try-on applications and games. To create a sense of immersion, avatars need to look like the users they represent. Our technology enables users to create realistic avatars of themselves on their home PC, gaming console or smartphone. It is based on a highly efficient solver for non-linear equations that automatically adapts a parametric avatar model to given color and depth data.
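
The sketch below illustrates the kind of solver involved: a small Gauss-Newton loop with a numerical Jacobian that adapts model parameters until the residuals against the captured data shrink. The toy example fits a sphere to noisy synthetic depth points as a stand-in for a full parametric avatar model; all names are hypothetical, and this is an illustration under simplified assumptions, not our production solver.

    import numpy as np

    def fit_parametric_model(residual_fn, params, num_iters=20, eps=1e-6):
        """Gauss-Newton loop: adapt model parameters to observed data.

        residual_fn(params) returns a vector of residuals, e.g. distances
        between a parametric surface and captured depth samples.
        """
        params = np.asarray(params, dtype=np.float64)
        for _ in range(num_iters):
            r = residual_fn(params)
            # Numerical Jacobian, one forward difference per parameter.
            J = np.stack([(residual_fn(params + eps * np.eye(len(params))[i]) - r) / eps
                          for i in range(len(params))], axis=1)
            # Least-squares solve for the Gauss-Newton update step.
            step, *_ = np.linalg.lstsq(J, -r, rcond=None)
            params = params + step
            if np.linalg.norm(step) < 1e-9:
                break
        return params

    # Toy stand-in for an avatar model: fit a sphere (cx, cy, cz, radius)
    # to noisy points sampled around a known sphere, as a depth sensor would see.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(200, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    points = np.array([0.1, -0.2, 1.5]) + 0.4 * dirs + 0.005 * rng.normal(size=(200, 3))

    def sphere_residuals(p):
        return np.linalg.norm(points - p[:3], axis=1) - p[3]

    print(fit_parametric_model(sphere_residuals, [0.0, 0.0, 1.0, 0.3]))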


Sensor Fusion And Multi-View Geometry

Over the years, we built several multi-camera systems, combining projection-based and time-of-flight depth sensors with color cameras to fuse 4D data sets of dynamic scenes. Our research focused in particular on high-performance processing of image sequences to drive a real-time 3D display. Our methods enable applications to create full 3D videos of people for telepresence and entertainment. Imagine a live magic mirror that shows you from all sides, immersed in a virtual world!
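
The basic fusion step is easy to sketch: each depth pixel is back-projected through the sensor's intrinsics and moved into a shared world frame via the calibrated extrinsics, carrying its registered color along. The snippet below shows this for a single sensor and frame; the names are hypothetical, and running it across all sensors over time yields the kind of 4D data sets described above.

    import numpy as np

    def fuse_depth_and_color(depth, color, K, cam_to_world):
        """Back-project one depth/color pair into a shared world frame.

        depth        -- HxW depth map in meters (0 marks invalid pixels)
        color        -- HxWx3 color image registered to the depth sensor
        K            -- 3x3 intrinsic matrix of the depth sensor
        cam_to_world -- 4x4 pose from the multi-camera calibration
        """
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        valid = depth > 0
        z = depth[valid]
        # Pinhole back-projection: pixel plus depth -> 3D camera-space point.
        x = (xs[valid] - K[0, 2]) * z / K[0, 0]
        y = (ys[valid] - K[1, 2]) * z / K[1, 1]
        pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
        # Calibrated extrinsics move every sensor into one common frame,
        # so the per-sensor clouds fuse into a single dynamic 3D scene.
        pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
        return pts_world, color[valid]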