While Apple is building more advanced hardware into its augmented reality devices, namely the iPad Pro's lidar sensor, ARCore has typically been designed to work on the lowest common denominator in camera hardware. In the past, ARCore used only one camera, even though most Android phones, including cheap ~$100 models, ship with multiple cameras that could help with 3D sensing. (Qualcomm merits some blame here, as its SoCs have often only supported running a single camera at a time.)
In version 1.18, ARCore can use some of this extra camera hardware to aid in 3D sensing for the first time ever. While the Depth API can run in a single-camera mode that uses motion to determine depth values, it can also pull data from a phone's time-of-flight sensor to improve depth quality. Samsung was called out specifically, with the Galaxy Note10+ and Galaxy S20 Ultra supporting this; note that both devices are the highest-end SKUs. Tons of phones have extra wide-angle and telephoto cameras, but not many phones have ToF cameras.
Previously, ARCore would map walls and floors and scale AR objects accordingly, but the Depth API enables things like occlusion: letting AR objects appear to be behind real-world objects. The other great feature enabled by depth sensing is simulated physics, such as the ability to throw a virtual object down a real-life staircase and watch it bounce realistically around.
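The occlusion idea boils down to a per-pixel depth comparison: a virtual object is only drawn where it is closer to the camera than the real-world surface the depth map reports at that pixel. Here is a minimal conceptual sketch of that test; the function name and data layout are illustrative, not part of the actual ARCore API (which does this comparison on the GPU against a depth texture).

```python
# Conceptual sketch of depth-based occlusion (illustrative only; not the
# real ARCore API, which performs this test per-fragment on the GPU).

def occlusion_mask(real_depth, virtual_depth):
    """Return a per-pixel mask: True where the virtual object is visible.

    real_depth    -- 2D list of sensed distances in meters (from a depth map)
    virtual_depth -- 2D list of the virtual object's distances in meters;
                     None where the object does not cover that pixel
    """
    mask = []
    for real_row, virt_row in zip(real_depth, virtual_depth):
        mask.append([
            v is not None and v < r  # visible only if in front of the real surface
            for r, v in zip(real_row, virt_row)
        ])
    return mask

# Example: a virtual object 1.5 m away, with a real wall 1.2 m away
# covering the left column of pixels.
real = [[1.2, 3.0],
        [1.2, 3.0]]
virtual = [[1.5, 1.5],
           [None, 1.5]]
print(occlusion_mask(real, virtual))
# -> [[False, True], [False, True]]  (left column hidden behind the wall)
```

The same depth map drives the physics feature: surfaces reconstructed from the depth values become collision geometry that virtual objects can bounce off.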
For a guess about ARCore's future, a good idea would be to look across the aisle at ARKit, Apple's augmented reality platform. One big depth feature in ARKit that does not appear to be mentioned in Google's blog post is "people occlusion," the ability to hide virtual objects behind moving people. Google's demos only show stationary objects occluding virtual ones.
The Depth API can be found in both the Android and Unity SDKs. You will need an ARCore-compatible phone to use these features.