Definition of Augmented Reality & Its Attributes
Over the past few years, augmented reality (AR) technology has already made an impact across multiple industries. It has helped enterprise businesses become more efficient. It has ushered in a new way for brands to market products. It can turn neighborhoods into virtual playgrounds. And it can add another dimension of fun and creativity to photos and videos. As a result, AR has the attention of investors.
But what is augmented reality, exactly? If you’re looking for the answer, you’ve come to the right place.
In 1994, researchers Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino defined augmented and virtual reality as points on a spectrum, dubbed the Reality-Virtuality continuum. On one end of the spectrum is the real environment seen naturally by humans. On the opposite end of the spectrum lies virtual reality, where the real environment is replaced completely by a digital environment.
Points in between the real environment and virtual reality are occupied by augmented reality, where hardware and software supplement the natural environment with digital content.
The researchers also coined the term mixed reality as an overarching classification for technology that merges the real and virtual environments, with Microsoft co-opting the term and conflating it with its own Mixed Reality platform for VR (thus confusing some consumers in recent years).
Fast-forwarding to the modern era, fulfilling the textbook definition of augmented reality relies on a computer's understanding of the environment, usually gained through a connected camera, in order to place virtual content within the user's field of view.
One way environmental understanding is achieved is via markers that enable the computer to track content within the environment. A marker can be created through the equivalent of a QR code that a computer’s camera recognizes as an area for the placement of virtual content. Another means of establishing a marker is a beacon that communicates its physical location to the AR device.
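Real SDKs handle marker tracking internally and in full 3D, but as an illustrative toy (the function and its conventions are hypothetical, not any SDK's actual API), the four corners of a detected square marker can be reduced to a simple 2D anchor, a center, an apparent size, and an in-plane rotation, for positioning overlay content:

```python
import math

def anchor_from_corners(corners):
    """Toy 2D anchor from four detected marker corners.

    corners: (x, y) pixel positions in order top-left, top-right,
    bottom-right, bottom-left. Returns (center_x, center_y, size,
    angle_degrees) for placing and rotating overlay content.
    """
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    (x0, y0), (x1, y1) = corners[0], corners[1]
    size = math.hypot(x1 - x0, y1 - y0)                 # length of the top edge
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))  # rotation of the top edge
    return cx, cy, size, angle
```

Because the marker's physical dimensions are known ahead of time, its apparent size in pixels also tells the software roughly how far away it is, which is what lets virtual content scale believably.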
Conversely, environmental understanding without a marker entails building a 3D map of the environment. Initial markerless augmented reality experiences required a camera that could sense depth within the environment. Without a depth sensor, computers can instead employ a computer vision algorithm trained to estimate surfaces for anchoring virtual content in the environment.
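Production systems do far more than this (feature tracking, SLAM, outlier rejection), but the core geometric step of surface estimation can be sketched as a least-squares plane fit over 3D feature points, a minimal illustration, not any toolkit's implementation:

```python
def fit_plane(points):
    """Least-squares fit of the plane z = a*x + b*y + c to 3D points.

    points: list of (x, y, z) tuples believed to lie near one surface
    (e.g. a tabletop). Solves the 3x3 normal equations directly and
    returns the coefficients (a, b, c).
    """
    # Accumulate A^T A and A^T b for design-matrix columns [x, y, 1].
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Gauss-Jordan elimination with partial pivoting on the 3x4 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * w for v, w in zip(m[r], m[i])]
    return tuple(m[i][3] / m[i][i] for i in range(3))
```

Once a plane is fit, virtual content can be "anchored" to it by projecting the content's position onto the estimated surface.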
Another element of environmental understanding is occlusion: real-world objects blocking the view of virtual content from the perspective of the computer's camera and its user, which enhances the realism of the virtual content. Typically, occlusion requires a depth sensor, but computer vision advancements in testing have demonstrated the ability to identify physical objects within the camera view without one.
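At its simplest, occlusion is a per-pixel depth comparison: the virtual object is drawn only where it is closer to the camera than whatever the depth sensor saw in the real world. A minimal sketch (hypothetical function, with depth maps reduced to small grids for illustration):

```python
def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel occlusion test.

    real_depth: 2D grid of sensed distances (meters) to real surfaces.
    virtual_depth: 2D grid of distances to the rendered virtual object,
    with float('inf') where the object is absent.
    Returns a mask that is True only where the virtual object is closer
    than the real world, i.e. where it should actually be drawn.
    """
    return [[v < r for v, r in zip(vrow, rrow)]
            for vrow, rrow in zip(virtual_depth, real_depth)]
```

In the masked-out pixels, the camera feed shows through, so a real coffee mug in front of a virtual character correctly hides part of the character.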
Finally, realistic augmented reality experiences call for 3D content. Developers generally use the same game engines used to create virtual reality experiences, chiefly Unity and the Unreal Engine, to create augmented reality content. Along with 3D engines, AR experiences need 3D models to display in real world physical environments. Models can be created in 3D modeling programs or captured through photogrammetry of real world objects.
How Technology Delivers Augmented Reality Experiences
We’ve established that augmented reality experiences are delivered via computing devices. For the average consumer, this means smartphones and tablets, which have the cameras for reading markers or detecting surfaces and the mobility for users to orient the device to their field of view. Many current smartphones also include sensors (usually an accelerometer, a magnetometer, and a gyroscope) that enable AR apps to orient the device, and thus its virtual content, within the user’s environment.
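As a rough illustration of what those sensors contribute, a stationary accelerometer reading is dominated by gravity, which lets software estimate how the device is tilted. The sketch below uses one common axis convention; actual platforms differ in axis orientation and fuse the gyroscope and magnetometer as well, and the function name is hypothetical:

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Estimate device tilt from a stationary accelerometer reading.

    ax, ay, az: acceleration in g along the device's x (right),
    y (top), and z (out of the screen) axes. When the device is still,
    gravity dominates the reading. Returns (pitch, roll) in degrees.
    """
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With tilt known, an AR app can keep virtual content level with the floor even as the user waves the phone around.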
However, for a more natural experience, head-mounted displays or fully immersive augmented reality headsets and smartglasses fit the bill. In layman’s terms, AR headsets are essentially mobile devices, with miniature displays configured to the user’s viewpoint, along with a computer, either embedded into the wearable device or connected (tethered) to an external computer. In more advanced cases, AR headsets also include depth sensors for environmental mapping.
Augmented reality has also made its way into automobiles. Heads-up displays in modern car models bring instrument panels, infotainment, and navigation into the driver’s windshield viewing area. When autonomous vehicles supplant manual models, augmented reality will likely play a role in sharing the vehicle’s view of the world — such as its recognition of other cars, pedestrians, and road hazards — with its passengers.
There are other form factors for augmented reality devices as well. For example, Lampix is an augmented reality lamp that can project an interactive workspace on any surface. Science fiction gives us examples of other forms that may arise in the very near future as well. Minority Report and Iron Man provide influences for interactive interfaces beyond AR headsets. Also, Netflix original series and movies have been a treasure trove of science fiction examples illustrating our potential real AR future, such as AR contact lenses in Altered Carbon and neural implants in Anon and Black Mirror.
Modern Pioneers of Augmented Reality
Augmented reality is far from a new technology. As a military tool, rudimentary AR usage dates back to the 1960s in heads-up displays for fighter jets. And that yellow first down marker line in American football TV broadcasts? Yup, that’s a form of augmented reality, too.
For all intents and purposes, though, the modern era of augmented reality tracks back to the 2010s. One of the earliest examples of mobile augmented reality was Layar, an augmented reality browser that displayed waypoints in its camera view and facilitated marker-based AR experiences. Another augmented reality startup called Blippar bought Layar in 2014 to contribute to its marker-based AR platform for advertisers.
But the one advanced AR device that the general public knows best is Google Glass, which made its public debut at Google I/O in 2012. Google made the wearable device available for purchase for $1,500 through an exclusive Explorer Program in 2013, which expanded to a wider audience in 2014. Unfortunately, the device, which used a not-too-subtle display and camera mounted into its frames to show notifications and content in the user’s field of view, faced a public backlash; early adopters were labeled as potentially privacy-invading “glassholes.” Google shelved the product for mainstream consumers, but relaunched the device in 2017 for enterprise customers, a segment that has found the technology useful for improving the productivity of various kinds of workers.
Google took another shot at augmented reality hardware in 2014 with its Project Tango platform, a combination of depth sensors for manufacturers and a development kit for building apps that can take advantage of the hardware. The first commercially available Tango device was released in 2016 via the Lenovo Phab2 Pro, which was followed in 2017 by the Asus ZenFone AR. Google closed down the program in 2017 in favor of a toolkit designed to work without specialized hardware (more on this later).
The year 2016 ended up being a pivotal year for modern augmented reality. Microsoft made its HoloLens headset available for purchase that year after introducing it in 2015. The HoloLens set the standard for augmented reality wearables by employing a depth sensor, adapted from the Kinect camera accessory for Xbox, to map physical environments, along with a gesture recognition system that has become a blueprint for other augmented reality headsets. However, at a price of about $3,000, the market for the HoloLens has been limited mostly to enterprise businesses and developers on the bleeding edge.
In the mobile AR ecosystem (that is, AR you use through a smartphone or tablet), Pokémon GO became the first blockbuster augmented reality app, turning neighborhoods and parks into virtual playgrounds for players to capture virtual creatures. In addition, Snapchat first added AR camera effects, or Lenses, to its app in 2016. Snapchat’s flavor of AR has since kickstarted the marketing industry’s adoption of the technology for a number of major brands and entertainment franchises. The two aforementioned mobile AR apps have become synonymous with augmented reality, particularly within mainstream media reports that attempt to explain AR to neophytes.
The Golden & Silver Ages of AR
For some AR industry watchers, it may seem premature to define what would be the silver or golden age of augmented reality. Nevertheless, considering the proliferation of new AR technology over the past two years alone, we may very well look back on this period as the silver age of AR.
Snapchat has continued to build upon its AR platform, adding AR content for the rear camera and enabling creators and brands to develop their own AR experiences with the Lens Studio desktop tool. Facebook has mirrored that strategy with its own AR platform and development app, Spark AR.
Meanwhile, Apple and Google have made it easier for mobile app developers to integrate AR into their apps with ARKit for iOS and ARCore for Android. The development toolkits use the computer vision capabilities and cameras of compatible smartphones and tablets to detect surfaces for anchoring AR content, estimate environmental lighting, and provide other features that help virtual content display realistically in the real world.
While Google abandoned depth sensors for smartphones, Apple has begun to ship depth-sensing cameras in its iPhone X series. Apple’s TrueDepth cameras have enabled facial recognition experiences that bleed into the AR space through tools such as Animoji. The company is reportedly working on bringing the technology to the rear camera, which would undoubtedly lead other smartphone makers to follow suit.
The platforms for mobile AR are quickly evolving, with the AR cloud (a digital copy of the world that enables multiuser experiences and persistent content in the real world) and occlusion gradually taking hold. So far, the Niantic Real World Platform, 6D.ai, and Ubiquity6 are among the leading AR cloud platforms in beta testing.
And after years of hype, Magic Leap finally released its AR headset, the Magic Leap One, in 2018. The device marks the strongest challenge to the HoloLens yet, with similar spatial computing and user interface capabilities at a slightly lower price. While it’s still priced outside the sweet spot for mainstream consumers, Magic Leap has begun to roll out a content line-up that appears to be drawing more interest from the consumer market, at least in terms of online chatter and general curiosity (it remains to be seen whether that interest will translate into sales).