ARKit

What is augmented reality?

Devices augment reality by placing virtual objects in your physical world, typically by using your phone’s camera. You view and interact with those objects through your device’s display.

What is ARKit, exactly?

Augmented reality (AR) describes user experiences that add 2D or 3D elements to the live view from a device’s camera in a way that makes those elements appear to inhabit the real world. ARKit combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience. You can use these technologies to create many kinds of AR experiences using either the back camera or front camera of an iOS device.

How does it work?

ARKit uses the iPhone’s cameras to map out an environment. A key part is recognizing where the walls and floor are, establishing the basic geometry of the space.

It’s the Apple alternative to Google Project Tango but puts fewer demands on the hardware. Project Tango requires a special camera array with dual cameras and an IR transmitter. This lets it make a rough 3D model of a room on the fly, and track motion with almost unnerving accuracy.

ARKit doesn’t require this. Instead, it recognizes planes, like your floor. Then it uses the phone’s camera and its motion detectors to track movement as the iPhone is moved and tilted.

Motion co-processors like the iPhone 7 Plus’s M10 can efficiently track data from the gyroscope, accelerometer, and magnetometer. This lets the phone monitor movement in a 3D space without excessive hardware demands. Google has announced something similar, called ARCore; it’s essentially Project Tango for phones without the flashy camera tech.

The core of ARKit is about letting you drop objects into an environment and manipulate them using the touchscreen. Beyond recognizing real-world surfaces, the main aim is to keep track of those objects as the iPhone is moved. As soon as it loses track of them, the illusion is shattered.

One of the most obvious uses for this is interior design. You could place a chair in the corner of the room, and see how it looks as you turn the phone for a different view. It is like a more advanced version of some online glasses retailers’ virtual try-on feature.

However, Apple has also shown how the functionality is being used in education, with videos showing children getting more information about a piece of art hanging on the wall in front of them, or being taught about the environment by projecting a landscape onto a table. We tend to think of far-fetched, futuristic uses when we start thinking about AR and VR, but some of the best applications of ARKit are actually fairly mundane.

Apple initially launched ARKit with iOS 11 in 2017. App developers could use Xcode, Apple’s software-development environment on Macs, to build apps with it. ARKit primarily does three essential things behind the scenes in AR apps: tracking, scene understanding, and rendering.

Tracking keeps tabs on a device’s position and orientation in the physical world, and it can track objects like posters and faces—though some of those trackable items were not supported in the initial iOS 11 release. Scene understanding essentially scans the environment and provides information about it to the developer, the app, or the user. In the first release, that meant horizontal planes and a few other things. Rendering means that ARKit handles most of the work for placing 3D objects contextually in the scene captured by the device’s camera, like putting a virtual table in the middle of the user’s dining room while they’re using a furniture shopping app. ARKit does this by tracking the environment in some specific ways. Let’s review what the initial release supported on that front.
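To make those three roles concrete before looking at the individual tracking modes, here is a minimal Swift sketch of an ARKit view controller: the ARSCNView handles rendering, its ARSession handles tracking, and the configuration drives scene understanding. The class name and layout code are illustrative, not taken from any particular app.

```swift
import UIKit
import ARKit

// Illustrative sketch of the smallest useful ARKit setup.
class ARViewController: UIViewController {
    // ARSCNView renders SceneKit content on top of the camera feed (rendering).
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking follows the device's position and orientation (tracking)
        // and feeds features such as plane detection (scene understanding).
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```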

Orientation tracking :

In the orientation tracking configuration, ARKit uses the device’s internal sensors to track rotation in three degrees of freedom, but it’s like turning your head without walking anywhere—changes in physical position aren’t tracked here, just orientation in a spherical virtual environment with the device at the origin. Orientation tracking is an especially useful approach for augmenting far-off objects and places outside the device’s immediate vicinity.
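A rough sketch of opting into this mode, assuming the app already has an ARSCNView; the function name is made up for illustration:

```swift
import ARKit

// Orientation-only (3DOF) tracking: rotation is tracked, physical movement is not,
// which suits augmenting distant scenery rather than nearby objects.
func runOrientationTracking(on sceneView: ARSCNView) {
    let configuration = AROrientationTrackingConfiguration()
    sceneView.session.run(configuration)
}
```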

World Tracking :

There’s more to world tracking. It tracks the device’s camera viewing orientation and any changes in the device’s physical location. So unlike orientation tracking, it understands if the device has moved two feet to the right. It also does this without any prior information about the environment. Further, ARKit uses a process called visual inertial odometry, which involves identifying key physical features in the environment around the device. Those features are recorded from multiple angles as the device is moved and reoriented in physical space.

The images captured in this process are used together to understand depth, similar to the way humans perceive depth with two eyes. This generates what ARKit calls a world map, which can be used to position and orient objects, apply lighting and shadows to them, and much more. The more a user moves and reorients, the more information is tracked, and the more accurate and realistic the augmentations can become. When ARKit builds the world map, it matches it to a virtual coordinate space in which objects can be placed.

The device needs uninterrupted sensor data, and this process works best in well-lit environments that are textured and that contain very distinct features; pointing the camera at a blank wall won’t help much. Too much movement in the scene can also trip the process up. ARKit tracks world map quality under the hood, and it indicates one of three states that developers are advised to report in turn to users in some way (a minimal handling sketch follows the list):

  • Not available: The world map is not yet built.
  • Limited: Some factor has prevented an adequate world map from being built, so functionality and accuracy may be limited.
  • Normal: The world map is robust enough that good augmentation can be expected.
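One plausible way to surface these states, sketched here using ARCamera’s tracking state and a simple print-out; a real app would keep a strong reference to the reporter, assign it as the session’s delegate, and show the status in its UI:

```swift
import ARKit

// Hedged sketch: reacting to changes in tracking quality.
final class TrackingStatusReporter: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .notAvailable:
            print("Tracking not available: the world map is not built yet")
        case .limited(let reason):
            print("Tracking limited (\(reason)): functionality and accuracy may suffer")
        case .normal:
            print("Tracking normal: augmentation should look good")
        }
    }
}
```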

Plane detection :

Plane detection uses the world map to detect surfaces on which augmented reality objects can be placed. When ARKit launched with iOS 11, only horizontal planes were detected and usable, and variations like bumps and curves could easily disturb efforts to accurately place 3D objects in the camera view on the device’s screen.
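In code, horizontal plane detection as it worked in the first release might look roughly like this; the delegate class and its name are illustrative:

```swift
import ARKit

// Hedged sketch of horizontal plane detection (the only kind in iOS 11.0).
final class PlaneDetectionDelegate: NSObject, ARSCNViewDelegate {
    func run(on sceneView: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a plane and adds an anchor for it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected a plane roughly \(plane.extent.x) x \(plane.extent.z) metres")
        // A real app would attach virtual content to `node` here.
    }
}
```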

Features added in iOS 11.3 :

Apple released ARKit 1.5 with iOS 11.3 in March 2018. The update made general improvements to the accuracy and quality of experiences that could be built with ARKit, without significant added developer effort. It also increased the resolution of the user’s camera-based view on their screen during AR experiences.

Vertical planes :

The initial version of ARKit could only detect, track, and place objects on flat horizontal surfaces, so ARKit 1.5 added the ability to do the same with vertical surfaces and (to some extent) irregular surfaces that aren’t completely flat. Developers could place objects on the wall, not just the floor, and to a point, literal bumps in the road were no longer figurative bumps in the road.
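With ARKit 1.5, enabling both orientations is a one-line change to the configuration; a hedged sketch, with an illustrative function name:

```swift
import ARKit

// ARKit 1.5 (iOS 11.3+): detect walls as well as floors.
func runVerticalPlaneDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    sceneView.session.run(configuration)
}
```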

Image recognition :

ARKit 1.5 added basic 2D image tracking, meaning that ARKit apps could recognize something like a page in a book, a movie poster, or a painting on the wall. Developers could easily make their applications introduce objects to the environment once the device recognized those 2D images. For example, a life-sized Iron Man suit could be placed in the environment when the user points the device’s camera at an Avengers movie poster.
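A hedged sketch of how such detection might be configured; "AR Resources" is an assumed asset-catalog group holding the reference images (the poster, painting, or book page), and the function name is illustrative:

```swift
import ARKit

// ARKit 1.5 image detection: world tracking plus a set of known 2D reference images.
func runImageDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                     bundle: nil) {
        configuration.detectionImages = images
    }
    sceneView.session.run(configuration)
    // When an image is recognized, the session adds an ARImageAnchor,
    // and content (say, a life-sized Iron Man suit) can be attached to its node.
}
```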

What Apple will add in iOS 12:

That brings us to WWDC on June 4, 2018, where Apple announced iOS 12 and some major enhancements and additions to ARKit that make the platform capable of a wider range of more realistic applications and experiences. The changes allow for virtual objects that fit into the environment more convincingly, multi-user AR experiences, and objects that remain in the same location in the environment across multiple sessions.

Saving and loading maps :

Previously, AR world maps were not saved across multiple sessions, and they were not transferable between devices. That meant that if an object was placed in a scene at a particular location, a user could not revisit that location and find that the app remembered it. It also meant that AR experiences were always solo ones in most ways that mattered.

In iOS 11.3, Apple introduced relocalization, which let an app restore its AR state after an interruption, such as the app being suspended. iOS 12 expands this significantly. Once a world map is acquired, the user can relocalize to it in a later session, or the world map can be shared with another user or device using the Multipeer Connectivity framework. Sharing can happen via AirDrop, Bluetooth, Wi-Fi, or a number of other methods.
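A rough sketch of that flow under iOS 12, with illustrative function names and minimal error handling: the current ARWorldMap is captured and archived so it can be stored or sent to a peer, and a later session relocalizes against it.

```swift
import ARKit

// Capture the current world map so it can be saved to disk or shared with a peer.
func captureWorldMap(from session: ARSession, completion: @escaping (Data?) -> Void) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap else { completion(nil); return }
        let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                     requiringSecureCoding: true)
        completion(data)
    }
}

// Relocalize a later (or remote) session against a previously saved map.
func restoreSession(on sceneView: ARSCNView, from data: Data) throws {
    let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```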

But multi-user gaming is not the only possible use case. Among other things, saving and loading maps could allow app developers to create persistent objects in a certain location, like a virtual statue in a town square, that all users on iOS devices would see in the same place whenever they visited. Users could even add their own objects to the world for other users to find.

There are still some limitations, though. Returning to a scene that has changed significantly in the real world since the last visit can obviously cause relocalization to fail, but even changed lighting conditions (like day vs. night) could cause a failure, too. This is a notable new feature in ARKit, but some work still needs to be done to fully realize its potential.

Image tracking :

Apple has added a new configuration called ARImageTrackingConfiguration, which enables building applications that focus on 2D images rather than using the full world-tracking approach. This is more performant for tracking many images at once, so it allows for better experiences in apps that are built entirely around 2D image recognition.
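A hedged sketch of the image-only configuration; "AR Resources" is again an assumed asset-catalog group, and the tracked-image limit is just an example value:

```swift
import ARKit

// ARKit 2 image-only tracking: cheaper than world tracking when the whole
// experience revolves around recognized 2D images.
func runImageTracking(on sceneView: ARSCNView) {
    let configuration = ARImageTrackingConfiguration()
    if let images = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                     bundle: nil) {
        configuration.trackingImages = images
        configuration.maximumNumberOfTrackedImages = 4
    }
    sceneView.session.run(configuration)
}
```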

Object detection :

ARKit 2 also extends this to 3D objects. Fundamentally, the way it reads a real-world 3D object is similar to the way it builds world maps. As with 2D image tracking, developers must include a reference object in the app for ARKit to compare against the real-world object. Developers are advised to track rigid objects that are texture-rich and that are neither transparent nor reflective.

Apple explains object tracking this way :

Your app provides reference objects, which encode three-dimensional spatial features of known real-world objects, and ARKit tells your app when and where it detects the corresponding real-world objects during an AR session.

The potential applications of this feature are numerous. ARKit could recognize a specific children’s action figure and add virtual objects to the scene with which the toy could appear to interact, for example—that’s basically what the LEGO app that was demonstrated at the WWDC keynote did. Or ARKit could identify a specific make and model of a car in the real world, and place a representation of the car’s name and specifications near its location in the user’s view.
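Configuring detection might look roughly like this; "AR Objects" is an assumed asset-catalog group of previously scanned reference objects, and the function name is illustrative:

```swift
import ARKit

// ARKit 2 object detection: world tracking plus a set of scanned 3D reference objects.
func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects =
        ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects", bundle: nil) ?? []
    sceneView.session.run(configuration)
    // When a known object (the action figure, the car) is found, the session
    // delivers an ARObjectAnchor that marks where it sits in the world.
}
```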

Environment texturing :

Finally, ARKit 2 supports advanced environment texturing. Among other things, Apple says that it trained a neural network on thousands of environments, allowing ARKit to essentially hallucinate the contents of gaps in the scene and world map with some degree of accuracy, which makes effects like realistic reflections on virtual objects possible. Apple showed off the new reflections possible with the texturing enhancements to developers at WWDC.
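Opting into this is a single property on the world-tracking configuration; a hedged sketch with an illustrative function name:

```swift
import ARKit

// ARKit 2 environment texturing: let ARKit generate environment maps so that
// reflective virtual objects can mirror their real surroundings.
func runWithEnvironmentTexturing(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    sceneView.session.run(configuration)
}
```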

Why Apple is so focused on AR

According to some analysts and reports, smartphone sales industry-wide were down for the first time in 2017. Though the iPhone X has been the world’s top-selling smartphone for much of the time since its introduction, in recent quarters Apple has only barely matched its smartphone sales from a year prior. The company is not currently in any immediate danger of disappointing stockholders—it had its best March quarter ever this year with more than $61 billion in revenue—but there may be concerning trends on the hardware front, so Apple needs to plan ahead.

The fact that Apple is still impressing investors with its quarterly earnings report is mostly thanks to two factors: the average selling price and profit margin of its phones (both are higher than much of the competition), and its growing services businesses, which include things like Apple Music and iCloud.

Developing an AR application involves much more than just choosing the right framework or SDK. At SPGON Software Solutions, we have experience in building mobile applications for major enterprises. Augmented reality apps extend the capability to enhance business with advanced augmentation, and our iOS developers know how to use ARKit’s potential to the full extent. To develop successful mobile applications, please Contact Us or drop us a line at info@spgon.com.


Author: Srinivasa Rao Polisetty – iOS Developer
Source: Wikipedia and developer.apple.com
