What's New in ARKit 3?
Changes from ARKit 2 to ARKit 3
One of the weaknesses of ARKit 2 was that it could not tell whether the camera's view of a virtual object was obstructed. No matter how realistic a virtual object looked, it would remain visible even when there was an obstacle between the object and the camera, or when someone walked in front of it. This genuinely broke the quality of the illusion, and was part of the ugly side of ARKit 2.
Well, that is no longer a weak point in ARKit 3. People Occlusion has now entered the game, allowing AR objects to appear in front of or behind people in a much more realistic way.
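As a minimal sketch, People Occlusion is opted into through the session configuration's frame semantics. The `arView` reference at the end is assumed to be your app's existing AR view:

```swift
import ARKit

// Enable People Occlusion on a world-tracking session.
// `.personSegmentationWithDepth` lets virtual content be hidden by
// people standing between the camera and the object.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}
// arView.session.run(configuration)  // `arView` is your ARView or ARSCNView
```

Checking `supportsFrameSemantics(_:)` first matters because the feature silently requires A12-class hardware.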
ARKit 2 already had hardware requirements for some of its features, so it's no surprise that the same applies to ARKit 3. In fact, its latest features require devices with at least an A12 processor, whereas for ARKit 2 the minimum was the A9.
This means that, for example, iPhone models older than the iPhone XS, or simply less powerful ones, are not able to leverage ARKit 3's new capabilities.
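Rather than hard-coding a list of device models, the idiomatic approach is to query ARKit's capability flags at runtime. A short sketch:

```swift
import ARKit

// Runtime capability checks instead of device-model checks.
if ARBodyTrackingConfiguration.isSupported {
    // A12 or later: Motion Capture is available.
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentation) {
    // People Occlusion is available on this device.
}
```

This keeps the app working correctly on future hardware without maintaining a model allowlist.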
RealityKit & Reality Composer
In the past, there was a steep learning curve for creating realistic AR experiences. This difficulty is considerably reduced thanks to the new RealityKit framework that ships alongside ARKit 3.
Prior to RealityKit, SceneKit was what allowed you to create apps with typical AR interactions, like placing virtual objects on walls or the floor. We used that framework when we tested ARKit 2 in order to work with 3D objects, and we had to dive into heavy 3D rendering concepts.
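To illustrate how much boilerplate RealityKit removes, here is a minimal sketch that anchors a simple box to a detected horizontal plane, with no manual lighting or material setup:

```swift
import ARKit
import RealityKit

// RealityKit sketch: place a virtual object on the first horizontal plane.
let arView = ARView(frame: .zero)

// Anchor content to a horizontal plane ARKit detects.
let anchor = AnchorEntity(plane: .horizontal)

// A 10 cm metallic box; RealityKit handles rendering and lighting.
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),
    materials: [SimpleMaterial(color: .gray, isMetallic: true)]
)
anchor.addChild(box)
arView.scene.addAnchor(anchor)
```

Compare this with SceneKit, where you would manage nodes, geometry, and plane-detection anchors yourself.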
ARKit 3 New Features
Above we mentioned ARKit 3's awareness of people with regard to People Occlusion, made possible by a machine learning model, but this new capability doesn't stop there. Awareness of people also lets you track human movement and use it as input for an AR app.
For instance, with Motion Capture you can replicate a person's movement in a virtual representation, such as a skeleton.
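A minimal sketch of what driving a skeleton from Motion Capture looks like: run a body-tracking configuration, then read joint transforms from the `ARBodyAnchor` in the standard `ARSessionDelegate` callback:

```swift
import ARKit

// Start body tracking if the device supports it (A12 or later).
func startBodyTracking(session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    session.run(ARBodyTrackingConfiguration())
}

// ARSessionDelegate callback: read the tracked skeleton's joints.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    for case let bodyAnchor as ARBodyAnchor in anchors {
        // Transform of the left hand joint, relative to the body's root.
        if let leftHand = bodyAnchor.skeleton.modelTransform(for: .leftHand) {
            print("Left hand position:", leftHand.columns.3)
        }
    }
}
```

Each joint transform can then be mapped onto a rigged 3D character to mirror the person's movement.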
Guided Plane Detection & Improved Accuracy
Something that wasn't available in ARKit 2 but would have helped a lot is built-in guidance for recognizing planes in the real world. If you wanted to guide users through this process, you had to write the code and figure it out on your own.
Now, ARKit 3 brings a straightforward assistant that lets the user know how plane detection is going and which plane has been detected, before any virtual object is placed in the environment.
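This assistant is exposed as `ARCoachingOverlayView`. A minimal sketch of wiring it to an existing AR view:

```swift
import ARKit

// Add Apple's built-in coaching overlay, which guides the user
// until a horizontal plane has been detected.
func addCoaching(to arView: ARSCNView) {
    let coachingOverlay = ARCoachingOverlayView()
    coachingOverlay.session = arView.session
    coachingOverlay.goal = .horizontalPlane
    coachingOverlay.activatesAutomatically = true
    coachingOverlay.frame = arView.bounds
    arView.addSubview(coachingOverlay)
}
```

The overlay appears automatically whenever tracking quality drops or the goal isn't met yet, and dismisses itself once a plane is found.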
Simultaneous Front and Back Camera & Collaborative Sessions
Using both front and back cameras at the same time opens a new range of possibilities for AR experiences, such as interacting with virtual objects through facial expressions. This was not available before because of the power consumption such a feature would require, but it is now viable thanks to a new API Apple released at WWDC 2019.
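In practice this is a single flag on the world-tracking configuration. A hedged sketch, assuming a running `ARSession`:

```swift
import ARKit

// Enable face tracking from the front camera while the back camera
// drives world tracking, so facial expressions can control virtual
// content in the rear-camera scene.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    configuration.userFaceTrackingEnabled = true
}
// session.run(configuration)
// ARFaceAnchor updates then arrive through the ARSessionDelegate
// alongside the world-tracking data.
```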
With collaborative sessions, you can now dive into a shared AR world with other people, which is especially useful for multiplayer games, as you can see in the Minecraft demo shown at WWDC19.
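At the API level, a collaborative session is opted into on the configuration, and ARKit then hands you incremental data to forward to peers over whatever transport you choose. A sketch (the `sendToAllPeers` helper is hypothetical; MultipeerConnectivity is one common transport choice):

```swift
import ARKit
import MultipeerConnectivity

// Opt into collaboration on the world-tracking configuration.
let configuration = ARWorldTrackingConfiguration()
configuration.isCollaborationEnabled = true

// ARSessionDelegate callback: ARKit emits collaboration data that must
// be relayed to the other participants' sessions.
func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
    guard let encoded = try? NSKeyedArchiver.archivedData(
        withRootObject: data, requiringSecureCoding: true
    ) else { return }
    // multipeerSession.sendToAllPeers(encoded)  // hypothetical networking helper
}
```

On the receiving side, each peer decodes the data and passes it to `session.update(with:)` so all devices converge on a shared map.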
Even though we weren't able to test these features ourselves, we consider them to be of high interest and believe they could push AR immersion even further.