Google Search AR Starts Rolling Out Depth-Based Object Blending/Occlusion

In December 2019, the Google AR & VR team announced that ARCore phones can detect depth using just a single camera lens.

The ARCore Depth API relies on a depth-from-motion algorithm, which means a single RGB camera is enough to create a depth map. As you move your phone, the algorithm captures multiple images of the scene from slightly different angles and compares them with each other. The apparent shift of each point between those images gives it a way to estimate the distance to every spot in the scene, and from those estimates it builds a depth map.
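The geometric principle behind this can be illustrated with classic triangulation: when the camera moves sideways by a known distance, a point's apparent shift between two frames (its disparity, in pixels) encodes how far away it is. The sketch below shows only this principle; it is not Google's actual depth-from-motion implementation, and all names are illustrative.

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Estimate the distance (in metres) to a point from its pixel
    disparity between two camera positions separated by `baseline_m`."""
    if disparity_px <= 0:
        raise ValueError("point must shift between frames to estimate depth")
    return focal_length_px * baseline_m / disparity_px

# Example: 500 px focal length, phone moved 5 cm, point shifted 10 px.
print(depth_from_disparity(500.0, 0.05, 10.0))  # 2.5 (metres)
```

Note how nearer points shift more between frames: halving the distance doubles the disparity, which is why small phone movements are enough to separate close objects from far ones.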

As a result, ARCore can now place virtual objects accurately in front of or behind real-world objects. Placed objects no longer hover in mid-air or clip halfway into other surfaces. This object occlusion has recently been introduced into Google Search AR.
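Conceptually, occlusion is a per-pixel depth test: a virtual object's pixel is drawn only where it is closer to the camera than the real surface recorded in the depth map. The following toy example illustrates that idea on tiny 2x2 "images"; the function and data are hypothetical, not ARCore's API.

```python
def composite(real_depth, virtual_depth, virtual_color, background):
    """Per-pixel occlusion: keep the virtual pixel only where the virtual
    object is nearer to the camera than the real-world surface."""
    out = []
    for row_r, row_v, row_c, row_b in zip(real_depth, virtual_depth,
                                          virtual_color, background):
        out.append([c if v < r else b
                    for r, v, c, b in zip(row_r, row_v, row_c, row_b)])
    return out

real    = [[2.0, 2.0], [0.5, 2.0]]   # metres to the real surface
virtual = [[1.0, 1.0], [1.0, 1.0]]   # virtual object 1 m away everywhere
colors  = [["V", "V"], ["V", "V"]]   # what the virtual object would draw
scene   = [["R", "R"], ["R", "R"]]   # the camera image

print(composite(real, virtual, colors, scene))
# The bottom-left pixel stays "R": a real object at 0.5 m occludes
# the virtual one at 1 m, so the virtual object appears behind it.
```

In a real renderer this test runs on the GPU against the depth map for every fragment, which is what lets a virtual cat disappear behind a real couch.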

With “a 3D understanding of the world,” AR experiences can become far more realistic, immersive, and less reality-breaking.