
Google Reveals What’s Behind Their AR Animations


Ever wondered how those masks, hats, and sunglasses in YouTube Stories look so real? Ever wanted to discover the magic concealed behind that polished presentation? If so, you have landed on the right page, because we are going to explain the AR tricks that Google's AI division has now revealed.

In a recent blog post, engineers at the Mountain View company explain how they simulate light reflections, model face occlusions, render specular highlights, and more, all in real time with a single camera.

According to Google AI research engineers Artsiom Ablavatski and Ivan Grishchenko, the critical aspect of creating these AR effects is properly anchoring the virtual content in the real-world scene. The process relies on an intricate set of technologies that track the surface geometry of every smile, smirk, or frown.

The engineers further explained that machine learning is used both to infer the three-dimensional geometry of the real world and to apply the visual effects.

The Selfie Mode

For the selfie mode so common today, Google uses an ML pipeline built from two real-time deep neural network models working together: a detector that operates on the full image and computes face locations, and a 3D mesh model that operates on those locations and predicts the surface geometry via regression. Once the face is located, the mesh network is applied one frame at a time, with windowed smoothing to reduce noise while minimizing lag during significant movement.
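The windowed-smoothing idea can be sketched in a few lines. This is a minimal, hypothetical stand-in (the class name, window size, and plain averaging are illustrative assumptions, not Google's actual implementation): each landmark coordinate is averaged over a sliding window of recent frames, which damps frame-to-frame jitter.

```python
from collections import deque

class LandmarkSmoother:
    """Smooths per-frame landmark positions with a sliding-window average.

    A toy stand-in for the windowed smoothing applied after the mesh
    model regresses landmark coordinates for each frame.
    """

    def __init__(self, window_size=5):
        # Keep only the most recent `window_size` frames of landmarks.
        self.window = deque(maxlen=window_size)

    def update(self, landmarks):
        # landmarks: list of (x, y, z) tuples for the current frame.
        self.window.append(landmarks)
        n = len(self.window)
        # Average each coordinate of each landmark over the window.
        return [
            tuple(sum(frame[i][c] for frame in self.window) / n
                  for c in range(3))
            for i in range(len(landmarks))
        ]

smoother = LandmarkSmoother(window_size=3)
smoother.update([(0.0, 0.0, 0.0)])
print(smoother.update([(3.0, 3.0, 3.0)]))  # [(1.5, 1.5, 1.5)]
```

A shorter window reacts faster during large head movements (less lag) at the cost of less noise suppression, which is the trade-off the pipeline balances.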

Google also uses TensorFlow Lite for on-device inference, together with a newly introduced GPU backend that accelerates the whole pipeline.

What’s the end product?

The end result of all this hard work is that users can capture realistic selfies and photographs with convincing AR effects. This is achieved by simulating light reflections via environment mapping for realistic rendering of glasses, by lighting the mesh naturally, and by modeling occlusions on the face so that virtual parts are hidden behind the face where they should be.
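The occlusion step above boils down to a per-pixel depth test. Here is a deliberately simplified sketch (the function and depth convention are assumptions for illustration, not Google's renderer): a virtual pixel is drawn only where the face surface is not closer to the camera than the virtual content.

```python
def composite(camera_px, virtual_px, virtual_depth, face_depth):
    """Return the visible pixel at one image location.

    camera_px:     the real camera pixel
    virtual_px:    the rendered virtual pixel, or None if no AR content here
    virtual_depth: distance of the virtual content from the camera
    face_depth:    distance of the face mesh surface, or None off the face
    """
    if virtual_px is None:
        return camera_px
    if face_depth is not None and face_depth < virtual_depth:
        # The face sits in front of the virtual content: occlude it.
        return camera_px
    return virtual_px

# A glasses arm (depth 0.5) behind the cheek (depth 0.3) stays hidden:
print(composite("skin", "glasses", 0.5, 0.3))   # skin
# Off the face there is nothing to occlude it, so it is drawn:
print(composite("sky", "glasses", 0.5, None))   # glasses
```

Because the mesh model predicts 3D surface geometry rather than a flat 2D outline, this depth information is available per pixel, which is what makes the hiding of virtual parts behind the face look natural.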

Source: Google AI Blog

