The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.
Remember when people said mobile was going to take over?
Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.
And mobile will, sooner or later, be replaced by augmented reality devices, and they will look nothing like Google Glass.
Not the future of augmented reality.
Why some predictions fail
When you view trends in technology in isolation, you inevitably end up misunderstanding them. What happens is that we freeze time, take a trend, and project that trend's future onto a society that looks almost exactly like today's.
This drains topics of their substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three such domains right now are blockchain, messaging bots, and virtual reality, though I count myself lucky to know a lot of brilliant people in these areas, too.
What I'm trying to say is: just because something is hyped doesn't mean it doesn't deserve your attention. Don't believe the hype; dig deeper.
The great convergence
To understand the significance of many of today's hyped topics, you have to link them together. Artificial intelligence, smart homes and the 'Internet of Things', and augmented reality will all click together seamlessly a decade from now.
And that shift is already well underway.
The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-player characters (NPCs) or 'bots' would have scripts that learned from my behaviour, so that they'd get better at defeating me. That seemed amazing, but their behaviour remained predictable.
In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.
This is also what has given rise to the current chatbot explosion. Our user interfaces are changing: instead of explicitly doing things ourselves, AI can be trained to interpret our requests, or even to predict and anticipate them.
Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control? Or a mouse & keyboard? But in the future, getting things done by tapping on our screens may look as archaic as it would be to do everything from a command-line interface (think MS-DOS).
There are certain benefits to command-line interfaces… (xkcd)
So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.
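To make the "chatbots are apps" point concrete, here is a minimal sketch of what such an app does: the platform delivers a user's message to the developer's code, which maps it to a reply. All names here (handle_message, the intents table) are hypothetical illustrations, not any platform's real API.

```python
def handle_message(text: str) -> str:
    """Map an incoming platform message to a reply via simple keyword matching.

    Real conversational platforms deliver messages to a developer webhook;
    this stand-in function plays the role of that webhook handler.
    """
    # A toy intent table; production bots would use trained NLU models instead.
    intents = {
        "weather": "It looks sunny today.",
        "hello": "Hi there! How can I help?",
    }
    lowered = text.lower()
    for keyword, reply in intents.items():
        if keyword in lowered:
            return reply
    return "Sorry, I didn't understand that."

print(handle_message("Hello bot"))
print(handle_message("What's the weather like?"))
```

The point of the sketch is the division of labour: the platform owns the conversation surface and message delivery, while the "app" is just this request-to-response logic running behind it.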
So we get new design disciplines: conversational interfaces, and 'zero UI', which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users' actions, a lot of design effort also goes into the personality of these interfaces.
But conversational interfaces are awkward, right? It's one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we're going, we won't need voice commands. At least not as many.
The Internet of Things
There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year.
Now converge this with the advances made in artificial intelligence. Instead of the 2016 version of voice-controlled devices in our homes and work environments, these devices' software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can develop spatial awareness. If that doesn't do it, they could use WiFi signals like radar to understand what's going on. Let's not forget cameras, too.
Your smart device knows what's in the fridge before you do, and what the weather is before you even wake up; it may even spot warning signs about your health before you perceive them yourself (smart toilets are real). And it can draw on very large data sets to help you with decision-making.
And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.
Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.
You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.
So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.
Our smart environments will interact with our AR devices to pull up HUDs when we most need them. So we won't have to issue awkward voice commands, because much of the time, it will already be taken care of.
But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.
Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design, conversational interfaces, and get used to sticking your head in front of a camera.
There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.
It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.
And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.
About the Author
This article was written by Bas Grasmayer. Bas is the Product Director at IDAGIO: streaming, reinvented for classical music. He writes about trends and innovation in tech and how they may impact the music business.