Will AR Make Us Masters of the Information Age?

A new wave of AR applications aims to help people better understand and interact with the world.

Augmented reality (AR) is often confused with virtual reality (VR), but the two let us see and interact with reality in very different ways. Both are evolving rapidly, but innovations in AR in particular could help people become masters of the Information Age.

In VR, we are completely immersed in a computer-generated environment, leaving the physical world behind. Common examples of VR include flight simulator training, real estate walk-throughs, exposure therapy for phobias and immersive 3D video games.

In AR, we remain in our physical surroundings, seeing and interacting with the real world. You can use AR to instantly translate signs or menus into the language of your choice, point at and identify stars and planets in the night sky and delve deeper into a museum exhibit with an interactive AR guide.

Findings from a recent survey by the Pew Research Center indicate the vast majority of American Internet users believe the web “helps them learn new things, stay better informed on topics that matter to them, and increases their capacity to share ideas and creations with others.”

Internet-connected devices, from smartphones and tablets to smart eyeglasses, bring supplemental or reference information and multimedia to what we already see in the real world. AR companies like Metaio and Blippar are pioneering this for the mainstream, with applications ranging from education to retail.

AR Turns a Corner

AR is entering a second wave that will profoundly change the way we experience reality. It is moving beyond a digital layer of information atop reality to combine with wearable technology, sensors, machine learning, artificial intelligence, big data and the Internet of Things.

Contrary to pop culture perceptions, this new era of technological realities is not about becoming cyborg-like, supplanting human ability or replacing the human imagination. Instead, it’s about extending human capacity to design unprecedented experiences, according to many researchers who believe we have an entirely new medium on our hands.

Dr. Steve Mann, the father of wearable computing, is widely considered the world's first cyborg, having invented and worn personal computers that have assisted his eyesight since the 1970s.

When asked during his 2013 keynote at Augmented World Expo in Silicon Valley what the killer app for AR would be, Dr. Mann responded, “Reality.”

Mann explained that, like any other technology, AR has to make our lives better in order to succeed.

In her 2009 TED talk, athlete Aimee Mullins explored the idea of prosthetic limbs making their wearers “super enabled” rather than disabled.

“It is no longer a conversation about overcoming deficiency,” she said. “It’s a conversation about augmentation. It’s a conversation about potential.”

AR threads together the real and digital worlds, but for it to become as common and natural in our daily lives as personal computing, we must understand and eventually trust its capabilities before we can experience its true benefits.

This next wave of AR shifts from static applications to more fluid, contextually adaptive experiences, in which our devices are highly cognizant of our preferences and our movements through changing environments.

Museums, for instance, are beginning to mine specific information from visitors to help deliver more personalized experiences.

Sree Sreenivasan, chief digital officer at New York’s Metropolitan Museum of Art, said, “I want to be able to know exactly what people have seen, what they love, what they want to see more of, and have the ability to serve it up to them instantly.”

For example, he said, “If someone loves a painting they’re looking at, they could get an instant coupon for the catalog, or a meal being sold at the cafeteria that’s based on it.”

AR Gets Personal

Future AR applications could change our relationship with personal devices.

Dr. Genevieve Bell, an anthropologist and Intel Fellow working at the intersection of cultural practice and technology adoption, describes a world in which we have more reciprocal relationships with our devices: they look after us, anticipate our needs and even act on our behalf, almost like invisible assistants.

Carolina Milanesi, a vice president at research firm Gartner, says that by 2017 our smartphones will be more alert than us, if not smarter, at least about many things.

“If there is heavy traffic, [your smartphone] will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague,” said Milanesi.

“The smartphone will gather contextual information from its calendar, its sensors, the user’s location and personal data.”

Gartner’s research suggests this will begin with time-consuming menial tasks, such as calendaring or responding to mundane email messages. Once we become confident in outsourcing these to our smartphones, we will grow accustomed to apps and services taking control of other aspects of our lives.

But are we ready to entrust more of our lives to the new AR capabilities of intelligent devices?

The late John Rheinfrank described a framework in which users engage with an adaptive system to “build worlds that collaboratively participate in the [co-evolution] of our individual and collective abilities.”

He describes this as worlds that shift “to meet our abilities, to anticipate whatever they are or what we want them to be.”

In Spike Jonze’s film “Her,” Samantha, the intelligent operating system, connects to everything in her user Theodore’s world, even taking his thoughts out to the Internet to find comparisons. Some might argue this helped Theodore be more human, while others saw him losing his grip on reality.

In discussing the film, Intel Futurist Brian David Johnson described how for decades our relationship with technology has been based on an input-output model. Essentially, it has been a command-and-control relationship.

If commands aren’t communicated correctly or our device doesn’t understand our accent, that relationship screeches to a halt.

Today, our computing devices know us better than ever thanks to services that track our health, pay our bills and notify us there’s traffic ahead. But despite increasing intelligence from things such as AR applications, Johnson states that technology is still just a tool that must serve human values.

“We can have the ability to design our machines to take care of the people we love, allowing us to extend our humanity,” he said, referring to our ability to design “our better angels.”

This new wave of AR may make us even more reliant upon technology, but if it is designed right, with human interaction at the core, AR can free us from screens, allowing us to focus more deeply on the real-world relationships and experiences that we love.

The question we need to ask as AR forges ahead, said Johnson, is “What are we optimizing for?”

The answer needs to be: to make people’s lives better.

This article also appeared on Intel IQ.


Helen Papagiannis