ARCore's New Depth API Will Bring The Next Generation Of Hide-and-Seek To Phones

Historically, AR applications on Android have had difficulty sensing depth and telling real-world foreground apart from background. When you place an AR object in a scene, instead of being blocked by the real objects in front of it, it simply sits on top of the entire picture. Google introduced the ARCore Depth API to overcome this: the new interface lets apps distinguish between foreground and background in the real world.
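
In practice, the Depth API is opt-in through ARCore's session configuration. The Kotlin sketch below shows roughly what enabling it looks like, assuming an ARCore Session has already been created and camera permission granted; the enableDepth function name is just for illustration.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session

// Sketch: opt in to the Depth API on an existing ARCore session.
fun enableDepth(session: Session) {
    val config = session.config
    // Depth is only available on supported devices, so check first.
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}
```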

Google has already used the technique to add features to its own products. To create a depth map, the API relies on a depth-from-motion algorithm similar to the one behind Portrait Mode in the Google Camera app: it estimates the distance to each pixel on the screen by combining multiple images captured from different angles as the camera moves. As a result, the API needs only a single camera.
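
Once depth is enabled, an app can pull the resulting depth map from each camera frame. The sketch below samples the distance at the center of the screen; it assumes ARCore 1.31 or later (which added acquireDepthImage16Bits, where each pixel is a 16-bit distance in millimeters), and depthAtCenterMeters is a hypothetical helper name.

```kotlin
import android.media.Image
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Sketch: read the depth value at the center pixel of the depth map.
fun depthAtCenterMeters(frame: Frame): Float? {
    val image: Image = try {
        frame.acquireDepthImage16Bits()
    } catch (e: NotYetAvailableException) {
        return null // the first few frames may not have depth yet
    }
    try {
        val plane = image.planes[0]
        val x = image.width / 2
        val y = image.height / 2
        val index = y * plane.rowStride + x * plane.pixelStride
        // Each sample is a 16-bit distance in millimeters
        // (little-endian per the ARCore documentation).
        val buffer = plane.buffer.order(ByteOrder.LITTLE_ENDIAN)
        val depthMm = buffer.getShort(index).toInt() and 0xFFFF
        return depthMm / 1000f
    } finally {
        image.close() // depth images must be released promptly
    }
}
```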

The depth information makes it possible to fully or partially hide digital objects behind real-world surfaces, a technique known as occlusion. Scene Viewer, the AR viewer that is part of Google Search and lets you view animals and other 3D models directly through your camera, is the first Google product equipped with the API.
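
Conceptually, occlusion boils down to a per-pixel comparison: if the real surface is closer to the camera than the virtual object, the real world should hide the object at that pixel. Production apps do this per fragment in a shader, but the hypothetical isOccluded helper below sketches the idea on the CPU, approximating the object's depth with its straight-line distance to the camera.

```kotlin
import com.google.ar.core.Camera
import com.google.ar.core.Pose
import kotlin.math.sqrt

// Sketch: decide whether a virtual object should be hidden, given the
// real-world depth sampled at its screen position.
fun isOccluded(camera: Camera, objectPose: Pose, realDepthMeters: Float): Boolean {
    val cam = camera.pose.translation
    val obj = objectPose.translation
    val dx = obj[0] - cam[0]
    val dy = obj[1] - cam[1]
    val dz = obj[2] - cam[2]
    // Simplification: Euclidean distance stands in for the object's
    // depth along the camera's view axis.
    val virtualDepthMeters = sqrt(dx * dx + dy * dy + dz * dz)
    // Hide the object when a real surface sits in front of it.
    return realDepthMeters < virtualDepthMeters
}
```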

Depth information is also used for improved path finding, proper surface interactions, and better physics: virtual objects can collide with and bounce off real-world surfaces instead of passing through them. And because phones are shipping with more cameras than before, depth perception will likely continue to improve, which should lead to interesting progress in the future.
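
As one concrete example of surface interaction, enabling depth also lets ARCore hit tests return DepthPoint results, so objects can be placed against arbitrary geometry rather than only detected planes. A minimal sketch, assuming depth mode is enabled and tapX/tapY are screen coordinates in pixels:

```kotlin
import com.google.ar.core.DepthPoint
import com.google.ar.core.Frame
import com.google.ar.core.HitResult

// Sketch: find the first hit backed by the depth map at a tapped point.
fun firstDepthHit(frame: Frame, tapX: Float, tapY: Float): HitResult? {
    return frame.hitTest(tapX, tapY).firstOrNull { hit ->
        hit.trackable is DepthPoint
    }
}
```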