WWDC 2020 — ARKit quietly moves to a whole new level!

Marcin Klimek
4 min read · Jun 25, 2020


After watching this year’s WWDC keynote, you could have been disappointed by the lack of news about the ARKit augmented reality platform. The novelties announced over the past years have shown enormous progress in the platform’s development, and their generous “airtime” signalled the high priority of the project. This year, ARKit on the main stage… nothing was said. Only the dedicated sessions with experts revealed phenomenal platform updates that take the AR experience to a whole new level. In my eyes, ARKit has become the most advanced AR platform on the market.

Introducing ARKit 4 presentation

Location Anchors
The first extremely important element presented in the ARKit 4 session is Location Anchors. It shows that Apple is joining the race to build solutions in the AR Cloud space. What’s more, the company has all the tools to achieve spectacular success.

AR Location Anchors map

Location Anchors combine ARKit’s existing ability to detect and track point clouds built from the camera image with geolocation data. This is not just simple GPS information, though. Apple combines dedicated data collected for its own satellite maps with advanced machine learning algorithms. Launching a project in 2012 to build its own map engine, in direct competition with Google, turned out to be a bullseye. The demos show very high accuracy and stability when placing 3D models against real buildings. Let’s hope the engine works just as well in practice.
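In code, placing content at real-world coordinates goes through the new `ARGeoTrackingConfiguration` and `ARGeoAnchor` APIs. A minimal sketch, assuming `arView` is an existing RealityKit `ARView` and using purely illustrative coordinates:

```swift
import ARKit
import RealityKit

// Geo tracking only works on supported devices and in mapped regions,
// so availability must be checked at runtime first.
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else {
        print("Geo tracking unavailable: \(error?.localizedDescription ?? "unsupported region")")
        return
    }
    let configuration = ARGeoTrackingConfiguration()
    arView.session.run(configuration)

    // Anchor content to latitude/longitude (illustrative coordinates).
    let coordinate = CLLocationCoordinate2D(latitude: 37.7955, longitude: -122.3937)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)
}
```

Once the anchor is added, 3D content attached to it stays fixed to that real-world location regardless of where the session started.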

Occlusion of people and objects
Last year, together with ARKit 3, Apple presented People Occlusion: a solution based on segmentation of the human body that allows virtual objects to be hidden behind people standing in front of them. Apple thereby solved a much more complex problem than Google, which on Android had delivered occlusion of static objects.

ARKit 4 Occlusion

This year, Apple has complemented the occlusion mechanism with the static-object variant mentioned above, and achieved much better results. The key was the LiDAR scanner introduced with the iPad Pro. The demos of the occlusion effect look great: very accurate and responsive. Once again, the Cupertino company’s holistic approach, covering both software and hardware, has paid off.
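Both kinds of occlusion are opt-in frame semantics on the session configuration. A short sketch, again assuming an existing `ARView` named `arView`:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// People occlusion (since ARKit 3): virtual content is hidden behind
// people detected in the camera frame.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// Scene depth from the LiDAR scanner (ARKit 4): a per-frame depth map
// that enables occlusion by static objects in the environment.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    configuration.frameSemantics.insert(.sceneDepth)
}

arView.session.run(configuration)
```

The runtime checks matter: `.sceneDepth` is only available on LiDAR-equipped devices, while people occlusion has broader support.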

Dynamic Lighting and physics
Thanks to the accurate physical model of the space provided by LiDAR, Apple could also deliver updates to dynamic lighting and physics. In previous versions of the engine, shadows could be generated, but the object had to be pinned to a specific plane; that is no longer necessary. The shadows a model casts on the plane below it are generated automatically and projected onto the surfaces detected via the depth map. The same applies to physics: we no longer need to artificially model and simulate the surrounding space. Three-dimensional models in the scene naturally fall onto, or bounce off, real objects.
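Under the hood this relies on LiDAR scene reconstruction plus RealityKit’s scene-understanding options. A sketch of how the two pieces are wired together (again assuming a RealityKit `ARView` named `arView`):

```swift
import ARKit
import RealityKit

let configuration = ARWorldTrackingConfiguration()

// Build a live mesh of the surroundings from LiDAR data
// (only available on LiDAR-equipped devices).
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
arView.session.run(configuration)

// Let RealityKit use the reconstructed mesh for occlusion,
// physics collisions, and lighting on real-world geometry.
arView.environment.sceneUnderstanding.options.insert([.occlusion, .physics, .receivesLighting])
```

With `.physics` enabled, a `ModelEntity` given collision and physics components will collide with real furniture and floors instead of passing through them.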

Video textures
The static textures available so far have been complemented by animated video versions, which lets 3D models used in AR experiences come to life. The new mechanism can be applied in many ways. In the WWDC video, a model was shown that used a video texture to display interesting flares on its surface. We can also imagine a device model whose display shows changing content, or even a dynamically generated video feed. The UV mapping process itself is the same as for static textures.

ARKit video texture
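In RealityKit this is exposed as `VideoMaterial`, driven by an `AVPlayer`. A minimal sketch, assuming a hypothetical video file named “glow.mp4” is bundled with the app:

```swift
import RealityKit
import AVFoundation

// Hypothetical video asset bundled with the app.
guard let url = Bundle.main.url(forResource: "glow", withExtension: "mp4") else {
    fatalError("Video asset missing")
}
let player = AVPlayer(url: url)

// VideoMaterial plays the video on the model's surface,
// using the mesh's existing UV mapping.
let videoMaterial = VideoMaterial(avPlayer: player)

let screen = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.25))
screen.model?.materials = [videoMaterial]
player.play()
```

Because the material wraps a standard `AVPlayer`, anything the player can render, including streamed or programmatically generated content, can end up on the model’s surface.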

Face detection and analysis from an RGB camera
The quiet extension of face tracking support to all devices with an A12 processor or newer has far more important implications than it may seem. Apple not only adds dedicated sensors such as the TrueDepth camera or LiDAR to its devices, but also develops internal algorithms that analyze data from a regular camera. In practice, the extended support means Apple has exposed its algorithm for reconstructing a 3D face model, together with its facial expressions, from RGB camera data alone.

ARKit 4 and A12 face detection
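From the developer’s side nothing changes except the support check: the same `ARFaceTrackingConfiguration` now passes on A12-class devices without a TrueDepth camera. A brief sketch, assuming an `ARView` named `arView`:

```swift
import ARKit

// On ARKit 4, this returns true on any device with an A12 Bionic
// or newer, even without a TrueDepth camera.
if ARFaceTrackingConfiguration.isSupported {
    let configuration = ARFaceTrackingConfiguration()
    // Track as many faces as the current hardware allows.
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    arView.session.run(configuration)
}
```

The resulting `ARFaceAnchor`s carry the reconstructed face geometry and blend-shape coefficients, regardless of whether the data came from TrueDepth or the RGB camera.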

Wrap up
Apple has been criticized for years for steady, incremental development and a lack of headline innovation. In my opinion, this stable and coherent development of the platform works very well. Years of improvements at both the hardware and software level mean we get a well-planned, well-thought-out and, above all, well-rounded product that just works. The quality and usability of the solutions described above, together with the fact that none of them made it onto the WWDC 2020 keynote stage, strengthen my belief that even more interesting announcements from Apple are coming soon.


Written by Marcin Klimek

Early adopter and evangelist of emerging technologies, focused on Augmented and Virtual Reality. Actively shaping the #FutureOfWork as CEO of ExplodedView.io.
