Reactive Reality’s technology drives a wide range of Augmented Reality (AR) applications by enabling quick content generation and deep user immersion. Our Artificial Intelligence (AI) methods capture users and objects and reproduce their appearance through image-based modeling. This approach is inherently scalable (every user has a camera) and realistic (photos and videos don’t look artificial, while virtual objects often do). All our methods run on conventional mobile devices without any special sensors and were designed for instant, zero-latency operation.

  • Mobile
  • Realistic
  • Instant
  • Scalable

Image Based Rendering

User Immersion Through Image Based Rendering

Reactive Reality’s technology enables users to immerse themselves into augmented reality scenes with apparel, objects and landmarks. For example, users can see themselves wearing a summer dress while standing on the beach or in the streets of New York. With a single swipe, a user is taken to a fashion show where she is posing as one of the models on a catwalk, wearing her favorite designer's clothes. Tilting the user's phone reveals new views of the scene and creates a strong sense of depth and immersion.
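
As an illustrative sketch only (not Reactive Reality's implementation), the tilt-to-view behavior described above can be thought of as snapping the current device tilt to the nearest pre-captured view of the scene; the angles and function names here are assumptions:

```python
import bisect

# Hypothetical view selection: each captured image of the scene has a known
# camera yaw, and tilting the phone picks the nearest stored perspective.
VIEW_ANGLES = [-30.0, -15.0, 0.0, 15.0, 30.0]  # yaw (degrees) of each view

def select_view(tilt_deg: float) -> int:
    """Return the index of the captured view nearest to the device tilt."""
    clamped = max(VIEW_ANGLES[0], min(VIEW_ANGLES[-1], tilt_deg))
    i = bisect.bisect_left(VIEW_ANGLES, clamped)
    if i == 0:
        return 0
    # pick whichever neighbour is closer to the clamped tilt
    if abs(VIEW_ANGLES[i] - clamped) < abs(clamped - VIEW_ANGLES[i - 1]):
        return i
    return i - 1
```

In a real renderer the discrete views would additionally be blended, but the lookup above captures why small rotations of the phone reveal new perspectives.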

The technology’s application areas are nearly limitless. Proprietary image-based modeling algorithms turn conventional photos and videos into AR objects that can be viewed, tried on, animated and interacted with. Users simply take a photo or capture a video to immerse themselves in other worlds, and can even create their own worlds on a smartphone.

Related Publications

  • Stefan Hauswiesner, Philipp Grasmug: Method and system for producing output images and method for generating image-related databases. Filed at EPO May 2015 Link
  • Stefan Hauswiesner, Efficient Image-based Augmentations, PhD Thesis, Graz University of Technology, 2013 PDF
  • Stefan Hauswiesner, Philipp Grasmug, Denis Kalkofen, Dieter Schmalstieg: Frame Cache Management for Multi-frame Rate Systems. Proceedings of the 8th International Symposium on Visual Computing (ISVC), 2012 PDF
Image Based Modeling and Machine Learning

Image-based Modeling Through Machine Learning

Reactive Reality’s technology can transform conventional images into augmented reality objects. Content can be product images from webshops or photographs taken by the user. Our methods use machine learning to identify the type of image, then shape fitting to obtain a geometric model. The key to this approach is how the shape templates used for fitting are obtained. We developed a backend application that lets us add new object categories within minutes, which in turn enables users to add as many objects to each category as they like. This allows for unprecedented scalability in content generation.
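
The two-stage idea above can be sketched minimally as a classifier followed by template fitting. The categories, templates and features below are invented for illustration and are not the actual pipeline:

```python
# Stage 1: a classifier assigns the image to a category.
# Stage 2: the shape template registered for that category is scaled
# to fit the observed silhouette.
TEMPLATES = {
    "dress":  {"width": 1.0, "length": 2.0},
    "jacket": {"width": 1.2, "length": 1.1},
}

def classify(aspect_ratio: float) -> str:
    """Toy stand-in for the learned classifier: tall silhouettes -> dress."""
    return "dress" if aspect_ratio > 1.5 else "jacket"

def fit_template(category: str, silhouette_height: float) -> dict:
    """Uniformly scale the category template to the observed silhouette."""
    t = TEMPLATES[category]
    s = silhouette_height / t["length"]
    return {"category": category,
            "width": t["width"] * s,
            "length": t["length"] * s}

model = fit_template(classify(2.0), silhouette_height=1.8)
```

Adding a new object category then amounts to registering one more template, which is what makes the backend-driven approach scale.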

Related Publications

  • Thomas Richter-Trummer, Jinwoo Park, Denis Kalkofen, and Dieter Schmalstieg. Instant Mixed Reality Lighting from Casual Scanning. In Proc. IEEE International Symposium on Mixed and Augmented Reality (ISMAR'16), Merida, Mexico, 2016 PDF
  • Stefan Hauswiesner, Philipp Grasmug: Method and system for generating garment model data. Filed at EPO December 2014 Link
  • Michael Donoser and Dieter Schmalstieg: Discrete-Continuous Gradient Orientation Estimation for Faster Image Segmentation. In Proc. IEEE Computer Vision and Pattern Recognition 2014, Columbus, OH, USA, 2014 PDF
  • Stefan Hauswiesner, Matthias Straka, Gerhard Reitmayr: Virtual Try-On Through Image-based Rendering. IEEE Transactions on Visualization and Computer Graphics (TVCG), 2013 PDF
  • Stefan Hauswiesner, Matthias Straka, Gerhard Reitmayr: Image-Based Clothes Transfer. Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Basel, Switzerland, 2011 PDF
Efficient Mesh Fitting

Adaptation Through Advanced Mesh Fitting

The adaptation and augmentation of clothes on a user’s image is achieved through advanced mesh fitting algorithms. Our proprietary algorithms use differential coordinates to represent garments and other non-rigid objects, allowing us to model fit, stretch, and stiffness as a linear system of equations. The mesh tessellation and correspondence search algorithms are specifically optimized for mobile CPUs. As a result, our solver converges within a fraction of a second on any recent smartphone or tablet. Our proprietary rendering methods utilize mobile graphics processors to achieve a high level of performance while keeping battery usage low.
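
A hedged sketch of the differential-coordinate idea, on a deliberately tiny 1-D chain of vertices rather than a garment mesh: each interior vertex should keep its Laplacian offset relative to its neighbours, while pinned vertices are moved to new positions, and stacking both kinds of constraint gives one linear least-squares system. The chain, weights and pin handling are illustrative assumptions:

```python
import numpy as np

def deform_chain(rest, pins):
    """rest: list of rest positions; pins: {vertex index: new position}."""
    n = len(rest)
    rows, rhs = [], []
    # Laplacian rows: preserve delta_i = v_i - (v_{i-1} + v_{i+1}) / 2
    for i in range(1, n - 1):
        row = np.zeros(n)
        row[i - 1], row[i], row[i + 1] = -0.5, 1.0, -0.5
        rows.append(row)
        rhs.append(rest[i] - 0.5 * (rest[i - 1] + rest[i + 1]))
    # Positional rows: pull pinned vertices to their targets (soft constraint)
    for i, p in pins.items():
        row = np.zeros(n)
        row[i] = 10.0
        rows.append(row)
        rhs.append(10.0 * p)
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Pinning both ends of a straight chain one unit further along translates
# the whole chain while preserving its local shape.
v = deform_chain([0.0, 1.0, 2.0, 3.0], {0: 1.0, 3: 4.0})
```

Because the whole deformation reduces to one linear solve, convergence time depends only on the system size, which is consistent with sub-second fitting on mobile CPUs.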

Related Publications

  • Michael Donoser, Martin Hirzer, Dieter Schmalstieg: Multiple Model Fitting by Evolutionary Dynamics. International Conference on Pattern Recognition, ICPR 2014, Stockholm, Sweden, 2014 PDF
  • Markus Steinberger, Michael Kenzel, Bernhard Kainz, Joerg Mueller, Peter Wonka, Dieter Schmalstieg: Parallel Generation of Architecture on the GPU. Computer Graphics Forum, 33(2), 2014 PDF
  • Stefan Hauswiesner, Matthias Straka, Gerhard Reitmayr: Temporal Coherence in Image-based Visual Hull Rendering. IEEE Transactions on Visualization and Computer Graphics (TVCG), 2013 PDF
  • Markus Steinberger, Bernhard Kainz, Bernhard Kerbl, Stefan Hauswiesner, Michael Kenzel, Dieter Schmalstieg: Softshell: Dynamic Scheduling on GPUs. ACM Transactions on Graphics, Proceedings of SIGGRAPH Asia 2012 PDF
  • Stefan Hauswiesner, Rostislav Khlebnikov, Markus Steinberger, Matthias Straka, Gerhard Reitmayr: Multi-GPU Image-based Visual Hull Rendering. Proceedings of the Eurographics Symposium on Parallel Graphics and Visualization, Sardinia, Italy, 2012 PDF
Body Pose and Shape Detection on Mobile Devices

Body Pose and Shape Detection Through Deformable Model Detection

To accelerate the adoption of augmented reality on mobile platforms, we developed a specialized set of algorithms that make integrating a user’s selfie very easy – a snap with her mobile device is all that’s needed. It’s a very challenging technical problem that Reactive Reality solves through articulated, deformable model detection. We cut down on the number of memory operations by using SIMD and GPU parallel instruction sets. We use a pipeline of different methods, including SVM-supported analysis of image features and probabilistic pose-space modeling. These methods can be used to recognize a user's body position and shape, but can also locate vehicles, machines, furniture and other articulated or rigid objects in images.
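
The combination of an SVM appearance score with a probabilistic pose-space prior can be sketched as follows; the weights, features and Gaussian prior are invented for the example and are not the production models:

```python
# A linear SVM scores image features for each candidate pose, and a
# pose-space prior reweights the scores so that implausible articulations
# are suppressed even when their appearance score is high.
SVM_W = [0.8, -0.2, 0.5]  # toy linear SVM weights
SVM_B = -0.1

def svm_score(features):
    return sum(w * f for w, f in zip(SVM_W, features)) + SVM_B

def pose_log_prior(joint_angle_deg):
    """Gaussian prior centred on a typical standing pose (0 degrees)."""
    sigma = 25.0
    return -0.5 * (joint_angle_deg / sigma) ** 2

def best_pose(candidates):
    """candidates: list of (joint_angle_deg, features); pick max posterior."""
    return max(candidates, key=lambda c: svm_score(c[1]) + pose_log_prior(c[0]))

# The extreme 80-degree pose scores slightly higher on appearance alone,
# but the prior rules it out in favour of the plausible standing pose.
pose = best_pose([(0.0, [0.9, 0.1, 0.4]), (80.0, [1.0, 0.0, 0.6])])
```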

Related Publications

  • Thomas Krinninger, One-Shot 3D Body-Measurement, Master's Thesis, Graz University of Technology, 2016 PDF
  • Matthias Straka, Stefan Hauswiesner, Matthias Rüther, Horst Bischof: Rapid Skin: Estimating the 3D Human Pose and Shape in Real-Time. Proceedings of 3DimPVT, Zürich, Switzerland, 2012 PDF
  • Matthias Straka, Stefan Hauswiesner, Matthias Rüther, Horst Bischof: Simultaneous Shape and Pose Adaption of Articulated Models using Linear Optimization. Proceedings of the 12th European Conference on Computer Vision (ECCV), 2012 PDF
  • Matthias Straka, Stefan Hauswiesner, Matthias Rüther, Horst Bischof: Skeletal Graph Based Human Pose Estimation in Real-Time, Proceedings of the British Machine Vision Conference (BMVC), 2011 PDF
  • Andreas Hartl, Lukas Gruber, Clemens Arth, Stefan Hauswiesner, Dieter Schmalstieg: Rapid Reconstruction of Small Objects on Mobile Phones. Proceedings of the Embedded Computer Vision Workshop (held in conjunction with CVPR), 2011 PDF
  • Matthias Straka, Stefan Hauswiesner, Matthias Rüther, Horst Bischof: A Free-Viewpoint Virtual Mirror with Marker-Less User Interaction. Proceedings of the 17th Scandinavian Conference on Image Analysis (SCIA), 2011 PDF
3D Reconstruction for Mixed Reality

Mixed Reality Through Avatar Model Adaptation

Virtual avatars are common in a wide range of applications such as telepresence systems, virtual try-on applications or games. Avatars need to look like the users they represent to create a sense of immersion. Reactive Reality's technology enables users to create realistic avatars of themselves on their home PC, gaming console or smartphone. Our technology is based on a highly efficient solver for non-linear equations that automatically adapts a parametric avatar model to given color and depth data.
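
As a minimal sketch of fitting a parametric model to observations with a non-linear least-squares solver (here Gauss-Newton on a single, invented "scale" parameter rather than a full avatar model over color and depth data):

```python
# Fit one scale parameter so that scaled template points match observed
# depth samples, by iteratively minimising the squared residual.
def gauss_newton_fit(template_pts, observed_pts, scale=1.0, iters=10):
    for _ in range(iters):
        # residuals r_i = scale * t_i - o_i ; Jacobian dr_i/dscale = t_i
        r = [scale * t - o for t, o in zip(template_pts, observed_pts)]
        jtj = sum(t * t for t in template_pts)
        jtr = sum(t * ri for t, ri in zip(template_pts, r))
        scale -= jtr / jtj  # Gauss-Newton step: solve (J^T J) d = J^T r
    return scale

# Observations are the template uniformly stretched by 1.7.
s = gauss_newton_fit([1.0, 2.0, 3.0], [1.7, 3.4, 5.1])
```

A real avatar fit solves for many shape and pose parameters at once, but each iteration has the same structure: linearize the residual, solve a normal-equation step, update the parameters.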

Related Publications

  • Christoph Bauernhofer, Dense Reconstruction On Mobile Devices, Master's Thesis, Graz University of Technology, 2016 PDF
  • Bernhard Kainz, Stefan Hauswiesner, Gerhard Reitmayr, Markus Steinberger, Raphael Grasset, Lukas Gruber, Eduardo Veas, Denis Kalkofen, Hartmut Seichter, Dieter Schmalstieg: OmniKinect: Real-Time Dense Volumetric Data Acquisition and Applications. Symposium on Virtual Reality Software and Technology (VRST), 2012 PDF
  • Stefan Hauswiesner, Matthias Straka, Gerhard Reitmayr: Free Viewpoint Virtual Try-On With Commodity Depth Cameras. Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI), ACM SIGGRAPH, Hong Kong, 2011 PDF
  • Stefan Hauswiesner, Matthias Straka, Gerhard Reitmayr: Coherent Image-Based Rendering of Real-World Objects. Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, San Francisco, CA, 2011 PDF