
Augmented Reality Development: Guide for Business Owners and Managers

Andrew Makarov,
Augmented Reality Solution Architect at MobiDev

Augmented reality (AR) has seen a significant uptick in commercial support from big tech names like Google, Apple, Amazon, and Microsoft in recent years. Gartner predicts that 100 million consumers will actively shop online using AR by 2020, and the number of AR-enabled devices is expected to reach 2.5 billion by 2023.


We’ll explore the current state of Augmented Reality technology in 2020 and compare the most popular AR SDKs and tools.


Augmented Reality Technology Overview

To understand what Augmented Reality technology is and how it works, we will break the AR ecosystem up into four classes based on the devices that run AR apps:

  • Mobile-based AR
  • Head-mounted gear, for example, Microsoft HoloLens
  • Smart glasses like Google Glass and competing products
  • Web browser-based AR

Classification of Augmented Reality experiences by device type
Mobile apps on phone and tablet platforms are focused on AR overlays that leverage a combination of the device’s processing power, camera lenses and internet connectivity to provide an augmented experience.

Headsets are built to deliver highly immersive experiences that mix augmented and virtual reality environments; Microsoft HoloLens and Magic Leap are the industry leaders in this space.

Smart glasses are lightweight, low-power wearables that provide first-person views; these include products like Google Glass Enterprise Edition and Vuzix Blade.


Mobile Augmented Reality

The mobile Augmented Reality development sector is easily the most accessible to both consumers and companies. Mobile AR apps are built on phone and tablet platforms, with AR content visible on the screen as a holographic overlay.

The 2017 introduction of Apple’s ARKit and Google’s ARCore Augmented Reality software development kits (SDKs) standardized the development tools and democratized mobile AR app creation. Within a year and a half, the number of mobile AR-enabled devices more than doubled and the number of active users tripled.

ARKit clearly dominates ARCore in installed base; however, ARCore has grown almost tenfold in absolute figures.
Mobile AR installed base for ARKit and ARCore in 2017-2019
Searches of code repositories show a preference for ARKit among Augmented Reality developers. Despite the fact that both platforms were announced within a few months of each other, there has been roughly three times more development activity around ARKit than ARCore, and nearly twice the discussion. Each of these frameworks gives Augmented Reality developers the resources needed to create robust virtual environments on top of real-world scenes.

Indicators of mobile Augmented Reality development by platform

May 2020                           ARKit (iOS)    ARCore (Android)
GitHub repository results          3,967          1,483
Stack Overflow question threads    2,409          1,342

Augmented Reality Development for iOS: ARKit

It’s worth taking a moment to discuss some of the solutions that will allow you to get the most out of the ARKit framework.

USDZ file formats for Augmented Reality Development

At WWDC 2018, Apple announced the introduction of a new file format named USDZ in ARKit 2.0. The goal of USDZ is to standardize 3-D asset data for AR development, which ultimately reduces compatibility issues and lets developers focus on coding. That means savings in both time and money.

Extended Face Tracking

The iOS face-detection system is hardware-dependent and is supported on devices with a TrueDepth camera, such as the iPhone X, XS, XS Max, and XR. This hardware-assisted detection and tracking of faces is faster and more precise than the software-driven face detection that Android-based devices employ.

Try-on capabilities in fashion and beauty shopping apps are the main consumer applications of the face detection feature. Social media and messaging platforms use tongue and wink tracking to let users augment communication with virtual filters and masks.

The video below shows how the iPhone employs AR-based gaze tracking, which enables eye-driven device control, from accessibility for users with limited mobility to hands-free operation and monitoring. Combined with computer vision, face-based AR opens up further opportunities for facial recognition apps.
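To give a sense of how little code face tracking requires, here is a minimal ARKit sketch. It assumes `sceneView` is an `ARSCNView` owned by the hosting view controller, and the filter-triggering bodies are placeholders:

```swift
import ARKit

// Face tracking is only available on devices with a TrueDepth camera.
guard ARFaceTrackingConfiguration.isSupported else {
    fatalError("This device does not support face tracking")
}
sceneView.session.run(ARFaceTrackingConfiguration())

// ARSCNViewDelegate callback: react to per-frame face updates.
func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let face = anchor as? ARFaceAnchor else { return }
    // Blend shapes are normalized 0...1 coefficients for facial expressions.
    let blink = face.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
    let tongue = face.blendShapes[.tongueOut]?.floatValue ?? 0
    if blink > 0.8 { /* trigger a wink-based filter (placeholder) */ }
    if tongue > 0.5 { /* trigger a tongue-based mask (placeholder) */ }
}
```

The blend-shape coefficients are what consumer apps map onto filters and masks; wink and tongue tracking shown above are the same signals the social platforms mentioned earlier rely on.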

Multi-user Experiences or Shared AR

The key purpose of shared AR is to produce a scene that multiple users can recognize from their own perspectives. This is accomplished using ARWorldMap, a snapshot of all the available spatial mapping information from the users’ devices. 3-D AR objects are then mapped onto the terrain just like actual objects in the real world. Done well, this process creates the illusion of a shared virtual space that each of the users can interact with seamlessly.
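The ARWorldMap round trip described above can be sketched in a few lines of ARKit code. This is a simplified illustration, assuming peers exchange the serialized map over some transport such as MultipeerConnectivity (the `sendToPeers` call is a placeholder):

```swift
import ARKit

// Host: capture the current world map and serialize it for peers.
func shareWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(
                withRootObject: map, requiringSecureCoding: true) {
            // sendToPeers(data) — e.g., over MultipeerConnectivity (placeholder)
        }
    }
}

// Peer: relocalize against the received map so anchors line up for everyone.
func joinSharedSession(_ session: ARSession, mapData: Data) {
    guard let map = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARWorldMap.self, from: mapData) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

Once a peer relocalizes against the shared map, anchors placed by any participant appear at the same physical positions for all of them.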

Thanks to this high level of interactivity, multi-user AR makes education, presentations, and collaboration easy and effective.

The video below explains how remote AR, a customized version of the shared AR experience, can be adopted in industry. Remote workers can collaborate with each other or consult with experts. By screencasting from their own devices, they can solve problems in minutes that used to take months of back-and-forth communication. A WebRTC-based system permits the communication of annotations, video, voice, and metadata in real time.

Reflection Mapping

For a realistic AR experience, objects in the 3-D space must have reflections. ARKit provides developers with a library of automatically generated textures for creating reflections and applying them to objects in apps.

Image Tracking

ARKit is capable of recognizing 2-D images in the real world and then imposing a virtual image over the original. It also tracks image movement in real time. For developers, this means being able to attach virtual content to surfaces, from business cards to billboards. The video below shows a use case for AR in marketing and brand promotion.
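As a sketch of what image tracking looks like in practice, the snippet below loads reference images and overlays a translucent plane on each detected image. It assumes `sceneView` is an `ARSCNView` in the hosting view controller and that the reference images live in an asset-catalog group named "AR Resources" (an assumed name):

```swift
import ARKit

// Load the 2-D reference images to recognize ("AR Resources" is an assumption).
guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else {
    fatalError("Missing reference images")
}

let config = ARImageTrackingConfiguration()
config.trackingImages = referenceImages
config.maximumNumberOfTrackedImages = 1
sceneView.session.run(config)

// ARSCNViewDelegate callback: place an overlay on each detected image.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // The anchor reports the image's physical size in meters.
    let size = imageAnchor.referenceImage.physicalSize
    let plane = SCNPlane(width: size.width, height: size.height)
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2  // rotate the plane to lie flat on the image
    node.addChildNode(planeNode)
}
```

In a real marketing app, the translucent plane would be replaced with branded video or 3-D content pinned to the tracked poster or business card.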

3-D Object Detection

The object detection feature has enabled virtual manual apps for cars, home equipment, and machinery. Use cases already on the market include Ask Mercedes, IKEA AssembleAR, and Hyundai Virtual Guide.

Tracking of 3-D real-world objects is a feature that is currently missing from ARKit. Hopefully, Apple will add it in a future version. Meanwhile, the arrival of iOS 13 and ARKit 3.0, announced at WWDC in June 2019, brought developers several new features:

  • Tracking multiple faces
  • Motion capture
  • People occlusion (virtual content hidden behind real people)
  • Concurrent use of both the back and front cameras on the phone

We are expecting new use cases with enhanced immersive AR experiences to appear after its official release.
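Enabling one of these ARKit 3.0 features is a small configuration change. The sketch below turns on people occlusion, assuming `session` is the app's `ARSession`:

```swift
import ARKit

let config = ARWorldTrackingConfiguration()
// People occlusion requires recent hardware (A12 chip or later), so check first.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}
session.run(config)
```

With this semantic enabled, ARKit segments people in the camera frame and renders virtual content behind them, which is what makes the new experiences feel properly immersive.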

Augmented Reality Development for Android: ARCore

Not to be outdone by its competitors at Apple, Google has also pushed ARCore development to keep pace with ARKit. Let’s check out some of the benefits ARCore offers app developers and businesses.

Cloud Anchors

To allow users to drop virtual objects into a scene in a shared physical space, ARCore uses Cloud Anchors. Multiple users then see the objects from their own perspectives. Cloud Anchors also allow collaborative experiences to be shared with users on Apple platforms.

The demo below explains how we can use the ARCore object detection feature in a virtual user manual.

Augmented Faces

By generating a 468-point 3-D mesh of a user’s face, ARCore enables developers to provide high-quality renderings of people. After the user’s face is identified, masks or filters can be applied, which opens up various use cases for AR apps.

Augmented Images

The image detection and tracking feature opens up the same sorts of use cases discussed above for ARKit: virtual business cards, advertising posters applied to large surfaces, and content markers for virtual manuals or promotional materials. These 2-D markers can even serve as waypoints in AR indoor navigation. You can watch an example of an ARCore-based indoor navigation solution below.

Which is better: ARKit or ARCore?

Just as Apple and Google chose to fight it out in the world of mobile phones and apps, they’re also going to war for market share in the AR sector. Each company is already very deep into building platforms for AR, and they’re both intent on being there as AR goes from revolutionary to mainstream.

For both of them, that’s a fight over platforms, installed users, and billions of dollars in cash. Apple has very successfully positioned itself as a lifestyle brand. There’s a reason that many startups are flooding the market with ARKit-based apps aimed at the iOS audience: that audience is perceived as more affluent and therefore a better source of revenue for cash-hungry startups.

However, ARCore is bigger in terms of market size. The installed base of ARCore-compatible Android devices grew from 250 million devices in December 2018 to 400 million in May 2019.

Both platforms offer much the same tools for understanding environments, accessing motion sensors, and monitoring changes in lighting conditions, and both ARKit and ARCore are compatible with the Unity framework. However, there are a few important differences. ARCore holds the upper hand when it comes to mapping. It collects, understands, and stores 3-D environment information in a way that makes later use or repurposing simple. The ARKit environment stores only a small amount of information about local conditions, so mapping data is limited to a “sliding window” of what has been recently experienced. ARCore’s larger mapping dataset lends it significant gains in both speed and stability. However, the ARWorldMap feature introduced in ARKit 2 has smoothed out this difference.

On the flip side, ARKit dominates in terms of recognizing and augmenting images. You can actually check out this comparison of how the two SDKs work by viewing the demo video below.

A user takes a look at the famous painting of the Mona Lisa, and then the app replaces the real picture with a virtual one. Watch as the subject of the painting blinks her eyes when the user taps on the virtual object version in the app.


It remains to be seen how each platform’s competitive advantages will play out, so it is likely unwise to pick a winner and a loser in this contest. Most likely, your business will have to create solutions that users can access on both platforms. That means it’s essential to understand the commonalities, strengths, and weaknesses of both ARKit and ARCore.


Roman Markov

Android / iOS Expert

Augmented Reality Development for Wearables

A wearable device is either headgear overtly strapped to the head or a set of glasses. While they go about the job in very different ways, the dominant form factors in wearables are:

  • Head-mounted gear like the Microsoft HoloLens
  • Glasses in the style of Google Glass

The differences between these devices cannot be overstated. HoloLens aspires to be the solution that blurs the line between AR and VR, giving rise to Mixed Reality (MR). While there is still a see-through display involved, virtual objects are rendered in the user’s line of sight and placed over the real-world environment.

Conversely, the core concept of Google Glass is to use the glasses as a layer where everyday content can be augmented with a 2-D display of information. It does much the same work as a smartphone: it supports apps, takes and stores photos and video, and provides wayfinding, mapping, and voice-based internet search. Digital content is projected into only one eye, and all information is displayed in 2-D rather than 3-D. In other words, it’s a screen.

The companies’ goals for the products are also different. Each has its own use cases, and it’s wise to review these players in the wearable sector separately rather than comparing them as competitors of any kind.


Microsoft HoloLens

87% of respondents surveyed by Harvard Business Review Analytic Services expressed interest in MR use cases, pilots or product deployments. Likewise, the survey indicated that most companies were confident that the now-robust technologies in the wearables sector would allow them to achieve gains in productivity, employee training and customer satisfaction.
Mixed Reality adoption across industries, 2018
Utilizing MR to provide remote support and assistance, collaborate, conduct inspections, and perform repairs represents some of the most practical use cases companies see for HMDs. They also see plenty of value in data visualization. Many companies see HMDs primarily as cost savers, in the sense that augmentation of workforce tasks can speed things up and provide quicker access to critical information. Boeing reported a 40% productivity improvement among electricians who used 3-D wiring diagrams inside planes. Airbus has identified more than 300 MR-based use cases.

What makes the HoloLens so impressive is the suite of sensors it employs to scan the user’s environment and generate 3-D meshes of those surfaces. This is one of the main reasons the HoloLens has been so successful in blurring the lines between AR and VR. It goes one step further than other products and treats the environment itself as just a very large object in the 3-D space.

Microsoft has been able to leverage its existing Kinect technology from its Xbox video game systems. The accuracy of its scanning and mapping to the 3-D virtual space is tight enough that it can flawlessly determine whether the user is touching and interacting with something in the app’s environment or in the real world. In other words, HoloLens isn’t just layering images over the world; it understands the space it operates in.

The semi-transparent design of the lenses in the HoloLens means that objects are always somewhat see-through. This is a deliberate safety feature: it prevents users from clumsily running into real-world objects.

Arriving in 2019, the HoloLens 2 features real-time eye tracking, see-through holographic lenses and a more powerful and faster processor, the Qualcomm Snapdragon 850. The company has stated that it can be worn for several hours. Likewise, the headband is meant to be more ergonomic, and the visor will now allow the user to flip it up to disengage from the MR environment.

Microsoft HoloLens 1 and 2 features

The HoloLens platform is impressively feature-rich, and it’s worth taking a moment to appreciate a few of its greatest strengths.

Interact with 3-D Content

For users who work with plans and materials, the HoloLens is an excellent way to interact with schematics and instructions. This makes it an ideal platform for engineers, designers, and architects, who can easily manipulate virtual objects in real time.

Hold Mixed-Reality Meetings

With HoloLens, meetings can include sharing of 2-D or 3-D graphics in real time, with alterations made on the fly. Remotely located team members can be represented in the virtual space by avatars.

Field Service Assistance

While performing repairs and maintenance, technicians often get stuck in situations where it’s neither feasible nor safe to remove their hands from a system. Simultaneously, they do need to pull information from manuals, databases and control units. They may also need to ask a question of someone elsewhere, and pulling out a phone isn’t always a great option. With HoloLens, they can access resources in real time through a voice- or gesture-driven interface.

Training

All the classroom learning in the world is no substitute for getting visual, tactile, and kinetic experience with objects. Especially for employees who don’t learn well from verbal instruction, HoloLens offers the chance to train in a hands-on environment. They get immediate feedback and can rewind the scene to whatever might have gone wrong.

This can be a huge leap forward in fields where training is risky or expensive. For example, a first-year medical student can’t be asked to perform many procedures on a live person. HoloAnatomy, though, opens up the chance to get hands-on experience they’d otherwise have to wait years to receive.

Space Planning

Looking at an empty space and imagining what it can be is a challenge for many people. While there are whole industries, such as interior design, that have grown up around visual creators who can picture the future of a space, the rest of humanity often struggles to see it. For creators, the HoloLens makes it possible to deliver an “oh” moment to a client. For customers, it means not having to take the designer’s word for it that everything will look amazing.

Microsoft doesn’t see HoloLens as an expensive VR toy for gaming enthusiasts. It’s a step toward the logical conclusion for mixed-reality devices, and the target buyers are users who need to change the ways we work, learn, and convey information.

In the long run, Microsoft has set a goal of reaching a form factor that duplicates that of reading glasses. A 2019 patent indicates Microsoft plans to bring to market a device approximating the largest set of glasses the company can picture a person comfortably wearing. They’re likely to achieve this goal in the coming years.

Alternatives for Microsoft HoloLens

When it comes to anything resembling a viable competitor, the only product in the space that holds a candle to the HoloLens is the Magic Leap One, backed by $2.3 billion of funding from Google and other major tech investors.

A mere industry rumor for seven years and possibly just vaporware, the headset eventually arrived in 2018. Reviewers frequently rated the experience superior to the HoloLens 1. In particular, the Magic Leap One delivers a field of view about a quarter larger than that of the HoloLens 1. On the downside, that actually puts the company a generation behind Microsoft, as the HoloLens 2 has a similarly sized field of view. That said, Magic Leap is the only alternative on the market that achieves a competitive level of quality.

Field of view comparison for HoloLens 1 vs HoloLens 2 vs Magic Leap One

Where the Magic Leap One really manages to outcompete the HoloLens is pricing: it costs about $1,000 less than the HoloLens 2. The company has also entered into several entertainment industry partnerships for augmenting 2-D television programs. In this regard, the Magic Leap HMDs appear to be much more targeted at the consumer sector, although business use cases should not be discounted.

Microsoft crushes the competition, though, on the basis of business support. Thanks to a long history of relationships with companies across numerous industries, Microsoft has loads of experience meeting enterprise expectations. It can readily build on what it has learned from developing products like Office or Azure to service its customers better. For Magic Leap and its business use cases, this presents the conundrum of whether enterprise clients want to take a chance on dealing with a fresh entrant or an experienced and respected heavyweight.

Practical Challenges of HoloLens App Development

The environment for each use case determines most of how well HoloLens performs in real life. To perform spatial mapping, HoloLens scans a room with infrared (IR) illumination and creates triangle meshes of the surfaces. HoloLens can have trouble with the task if you:

  • Walk around a dimly-lit room;
  • Encounter a black, planar surface that doesn’t reflect IR light well;
  • Deal with objects that react differently to infrared or lack visual consistency from all angles;
  • Observe moving objects or people walking.

These issues often force HoloLens to rescan, and the scene can also fail to match the original spatial map the system creates at the beginning of a session. On average, the best use cases tend to involve uniform, well-lit environments.

Thanks to a robust framework, the HoloLens isn’t a challenging platform for coders. Instead, the biggest challenges tend to be in the creation of 3-D models. In particular, there can be a difficult balancing act between visual quality and rendering performance.

Leaning too far either way risks breaking the illusion of realism, and the decision often boils down to practicality versus visual impressiveness. It’s best to constrain polygon counts in the design process, reducing models from 100,000 to around 1,000 triangles.

As a project moves ahead, it’s wise to manage expectations. Given that UI/UX expectations are already defined by consumer mobile apps, this can pose some problems. Users need to get accustomed to the fact that HoloLens calls for some understanding of things like semi-transparent versus solid 3-D objects, and they must develop the skills required to use gesture and voice input commands.


Alex Vasilchenko

Web Group Leader



Augmented Reality Development for Smart Glasses

The Google Glass launch was massively hyped in 2013, but the initial momentum among consumers eventually wore off. Business use cases, however, did appear, and the company rebranded the product as the Google Glass Enterprise Edition. Thanks to Google’s robust support for both apps and device APIs, the enterprise version of the smart glasses has attracted a following.

One measure of the success of smart glasses has been the recent appearance of competitors. As can be expected whenever one of the two companies proves a market, Apple has sought to compete with Google. Many mid-tier tech players have entered the game, and Vuzix Corporation is an especially strong standout in that group.

The niche for smart glasses largely overlaps with the use cases for the HoloLens. Generally, the systems are favored for work that demands hands-free engagement or calls for visualization while looking at a real-world environment. In particular, smart glasses are ideal for situations where the bulk of HMDs isn’t worth the payoff in total performance.

Clients typically see value in several specific applications, including:

  • Viewing 2-D schematics
  • Information-intensive jobs that call for regularly being away from a workstation
  • Quickly checking files
  • Showing colleagues and supervisors what the wearer sees
  • Live streaming of first-person video

Several interesting and highly practical use cases for Google Glass Enterprise Edition have emerged over the last couple of years. Major corporations like GE and DHL have utilized them to boost worker productivity. An intriguing entry comes from AugMedix, a company that provides smart glasses and apps to physicians who provide patient care, leading to productivity increases of nearly one-third among their customers. EyeSucceed has entered the market with a hands-free learning system that allows trainers to follow along with new workers by watching their live streams in first-person views.

Google is also upping processing power and battery life with the new generation of smart glasses. The Google Glass Enterprise Edition 2 is designed around the Snapdragon XR1 platform. Another improvement is coming in the form of frames from Smith Optics that are much closer to what we see with traditional reading glasses, meaning the newest smart glasses will easily blow past both the HoloLens and Magic Leap platforms in terms of lightness.

Providing smart glasses to people who already wear prescription glasses is a big issue. These buyers will likely need a way to acquire bespoke solutions without incurring significantly higher costs.

Vuzix is highly interested in addressing the question of prescription lenses in this sector. At CES 2019, the company announced consumer-grade devices called Blade AR Smart Glasses that are intended to provide prescription lenses in a consumer-friendly package for about $1,200. Working from the near-ubiquitous XR1 platform, Vuzix intends to deliver a product with the processing power to handle vision-based machine learning on the fly. Combined with AR functionality, these glasses are expected to set the standard for object classification.


Artem Kravchenko

iOS Developer


Web-Based Augmented Reality Development

For the vast majority of the public, mobile apps have been the only way to make use of AR functionality. Moving past the app economy is important for one big reason: most users aren’t especially committed to apps. According to surveys, one in five consumers abandons an app after using it once. Likewise, seven in ten app users churn within 90 days.

Mobile apps retention rate stats

Even those who commit to apps aren’t deeply in love with them. Of the 80 or so apps the average user has installed, only 10 are used each day, and just 30 each month.

Particularly for e-commerce companies, that represents a scary lack of loyalty. You simply don’t want to put millions of dollars into creating an AR app that will likely be forgotten by the downloader within three months.

One solution that has great appeal is web-based AR. Rather than requiring the user to download an app, all the functionality is packed into the phone’s web browser. AR-enabled websites get access to the same features as apps, and users lose the hassle of constantly installing and uninstalling barely wanted software. Powering this revolution is the WebXR API, built to provide support for both AR and VR scenes right in web browsers.

The key to the success of web-based AR will be broader device support and more AR-enabled websites. Device support will improve as more Apple owners upgrade to iOS 12, as Android fully releases WebXR support in Chrome, and as the Apple, Samsung, and Microsoft browsers adopt the WebXR standards. Although these standards have yet to be finalized, AR in browsers is already implemented by porting existing libraries (e.g., AR.js) or developing new ones (e.g., A-Frame, React 360).


Yuriy Luchaninov

JavaScript Group Leader

Use cases are still emerging, but limited compatibility is a drag on the industry. Some companies are starting to enable web-based AR, including the retail platform Shopify and Sephora’s Virtual Artist; as time goes on, more will follow.



Augmented Reality Development Main Takeaways

Augmented Reality development is primarily about finding solutions to software and hardware limitations. Despite the technology’s breakout and fast-evolving development tools, there are still blind spots developers face when dealing with various devices and OS versions. This is more typical of Android, due to the vast range of devices and ARCore still being in its infancy.

In the next article, we will guide you through the Augmented Reality development process and showcase an AR app development case study, exploring how to use Augmented Reality for business needs.
