The team at Voyant AR have had a lot of fun this year developing and experimenting with AR experiences. It’s made us reflect on how different it is from other digital mediums such as web, mobile and even VR. While VR and AR development may share common tools and frameworks, the process for designing and understanding user experience is VERY different.
Working through various prototypes, we kept note of important issues or surprising discoveries, with the goal of creating an internal design playbook. Our work also stimulated lots of discussion about where AR is headed, what the future may look like and what impact it will have on daily life. Here are the common themes from our findings so far, along with their potential implications for designing future AR experiences.
AR devices will soon replace today’s computers
Let that sink in for a moment. We won’t need a smartphone, laptop or desktop computer. All we’ll need is a pair of AR glasses. Because those glasses will be your computer. You’ll wear them all day, every day.
Designers need to start thinking about the future state of AR – now. We may not yet have all the software and hardware required to make an AR experience as immersive and frictionless as we’d like it to be, but we can imagine and design for that future, then build something today that’s as close as we can get. We’ll have created the foundation for that future experience and as the building blocks and tools become available we’ll be able to adapt quickly. The advantage that our team has found with this approach is that we’re already thinking of future use cases and how we might solve them, even if we can’t do that today.
Assume you can, until you can’t
At this point, designing for AR is a bit like the Wild West. There are no definitive guides, best practices or agreed-upon approaches on how best to design for AR. As designers we’re truly blessed to be part of this new era. It’s a once-in-a-lifetime opportunity to play, experiment and explore this amazing new medium and help shape what’s to come.
When there are so many unknowns with a new technology, there is a tendency to start by creating something easy or simple, just to test things out initially and then slowly build on that. But in setting the bar so low, so early, you’ve already limited what’s possible. Besides, how do you really know what will be “easy” or “simple” to create in the first place?
We need to alter that mindset. When it comes to designing for AR – think big! Then think bigger! The shift towards spatial computing is such a profound transformation that it will impact every aspect of our lives. We don’t know what we don’t know. Our current perspective (and subconscious bias) has been shaped by previous experience designing for web, mobile, social and VR. Some practices might be applicable to AR, but most won’t. So imagine what will be possible – what you want to be possible – and then build it.
To expand a team’s perspective, it helps to include subject-matter experts or users who have little technical expertise to provide feedback on the design and prototyping phase. This is good practice for designing user experience in general, but it’s especially helpful for AR. We have found that their feedback and suggestions aren’t as likely to be influenced by prior experience or knowledge. Rather, they’re focused on what makes a good experience and what they’d like to see.
AR “apps” won’t be a thing
Most AR content today is available via an app. But the current “app paradigm” which underpins the distribution and consumption of mobile content will NOT be part of the future state of AR. It is far more likely that AR content will be distributed via layers or filters in which people can tune in or out as they please. (Check out this article by Matt Miesnieks on Why the YouTube of AR won’t be YouTube.)
Today’s mobile apps operate in silos. You open, interact, then minimise/close them, before opening another. You might feel like you’re multi-tasking with several apps at once but you’re just swapping rapidly between them. True integration would be like playing Robot Unicorn Attack in your email inbox. Voice assistants (like Google Home, Amazon Echo) are the closest we have today to that type of seamless integration. Behind the scenes, the system is accessing different apps but what the user perceives is a single easy-to-use interface.
Future AR content, just like voice assistants, will have an “always on” state. That content will live in your environment, appearing and changing based on your behaviour, explicit commands and pre-programmed contextual-awareness (more on this in a future blog post).
What does that mean when designing for future AR experiences? Context becomes king. Where is the user and what are they doing? What is going on around them? Get your hands on any tools that allow you to start exploring this now. We’re already looking at integrating location-based data (Project Chombo), natural language processing and machine learning (Project Evan) to enhance user experience based on contextual awareness.
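To make the idea concrete, here is a minimal sketch of context-driven layer selection. All of the names, contexts and rules are illustrative assumptions for this post, not how Project Chombo or Project Evan actually work.

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str      # e.g. "home", "office", "transit"
    activity: str      # e.g. "idle", "walking", "in_meeting"
    hands_free: bool   # can the user currently gesture?

def select_layers(ctx: Context) -> list:
    """Pick which AR layers should be active for the current context."""
    layers = []
    if ctx.location == "home":
        layers.append("ambient_decor")        # e.g. an AR bonsai in the lounge
    if ctx.activity == "walking":
        layers.append("navigation_overlay")
    if ctx.activity == "in_meeting":
        layers = ["minimal_notifications"]    # suppress everything else
    if not ctx.hands_free:
        layers.append("voice_input")          # fall back to spoken commands
    return layers

print(select_layers(Context("home", "idle", True)))
# → ['ambient_decor']
```

In practice the rules would come from learned models and richer sensor input rather than hand-written conditions, but the shape of the problem – context in, active layers out – stays the same.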
Although it may not have too much impact for now, it will be important to consider how your AR content may interact (or not) with other AR layers. For example, imagine your AR content is an ’80s layer that transports the user back to 1988 by superimposing retro content over real world advertising (a concept that Julian Oliver explored with Artvertiser in 2008, transposing art over ads) while playing a radio feed from that era. That AR content layer might include a plugin that applies an “’80s-style filter” to communication tools like email, chat or voice calls.
Aesthetic meets functionality
If AR content is “always on”, it makes sense that it should be displayed or conveyed in an aesthetically pleasing manner. You might want an animated AR bonsai tree because it’s visually appealing in your home and easier to look after than a real one.
But if we extend this notion, AR will also allow designers to ascribe practical function to beautiful objects – both AR and real. A sculpture at your front door reminds you of a colleague’s birthday, a candle holder signals a room’s ambient temperature, or a lamp displays the time. This is exactly the idea we explored with Project Bonsai, where we integrated live weather data from the Australian Bureau of Meteorology to animate a virtual bonsai tree.
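As a sketch of the idea behind Project Bonsai, here is how a weather observation might be translated into animation parameters for a virtual tree. The field names, thresholds and outputs are hypothetical; the real Bureau of Meteorology feed and our implementation differ.

```python
def bonsai_state(temp_c: float, wind_kmh: float, rain_mm: float) -> dict:
    """Translate one weather observation into animation parameters."""
    return {
        # leaves sway faster in stronger wind, capped so it stays readable
        "sway_speed": min(wind_kmh / 10.0, 5.0),
        # hot days shift the foliage to warm tones, otherwise stay green
        "leaf_tint": "amber" if temp_c > 30 else "green",
        # show falling rain particles only when it is actually raining
        "rain_particles": rain_mm > 0,
    }

print(bonsai_state(temp_c=34.0, wind_kmh=25.0, rain_mm=0.0))
# → {'sway_speed': 2.5, 'leaf_tint': 'amber', 'rain_particles': False}
```

The point is the separation of concerns: a live data source drives a small state dictionary, and the rendering layer only ever reads that state, so swapping weather for any other feed (a birthday calendar, room temperature, the time) is trivial.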
The future state of AR will include content that is so seamlessly integrated with the real world, it will be difficult to distinguish between what is real and unreal.
AR represents a step change in human-computer interaction into the world of spatial computing: the ability to view and manipulate data in 3D space. Thus, the ideal way we interact with that content should be natural and require little thought or training.
Lightweight glasses with integrated headphones and microphones, combined with voice and gesture recognition, are tools that will support the design of these interfaces, allowing more natural user inputs to blend with natural AR outputs. For example, we can’t always use our hands or want to speak aloud, so designers need to understand context and accommodate users’ desire for handsfree interaction and privacy accordingly.
Today’s voice assistants are an excellent example of how good frictionless UI can feel. Initiating a request does not require pulling out a phone, finding the right app, opening it, navigating to the right spot and then typing a command; you just ask aloud “Hey Google, what’s the time in Paris?” Again, whether the user wants to (or can) speak, look, type or gesture to interact with AR content depends on contextual-awareness.
I once saw an AR demo that was designed to guide a technician in the use of equipment. It was being reviewed by a vlogger who commented, “I’m not familiar with product X so I was a bit lost while I was using the app”. But a well designed AR experience (or any user experience for that matter) should be easy to use and understand. If the point of that app was to train new technicians who had no prior experience, then unfortunately it failed. It’s a good reminder that no matter how cool an experience may look or how natural the UI is, designing for AR still requires a solid design approach. (I’ve included some augmented reality design resources here.)
Object permanence is a notion that was pioneered by developmental psychologist Jean Piaget. It describes the ability for children to understand that objects continue to exist even when they can’t be seen or sensed.
Through initial prototyping, we’ve come to realise that there are important implications for AR. Specifically, there may be certain contexts where it is beneficial for AR objects to behave like real world objects.
- AR objects can exist independently of the user. For example, the AR object could be anchored to a specific location in your home. When you leave the room, the object remains. When you return, it’s still in the same location. This may sound like a redundant feature but in fact, this ability facilitates user immersion; when AR objects behave like real objects, they seem more like real objects. I explored this idea through a mixed reality game design for the horror movie It Follows.
- AR objects can be anchored to a set of geo coordinates in the real world. If an AR object is anchored to a location in the real world and can only be accessed at that location, it’s inherently more interesting than just being able to access that content from our lounge room. Extending this idea, we could take specific context from that location (history, utilities, public services) and alter the content accordingly. We’re currently exploring location-based content with our current project, Chombo, placing AR experiences at specific geo coordinates in the real world.
- AR objects could travel with you. If an AR object is a utility you may choose to take it with you. It could be anything from your voice assistant represented as a cute fluffy bunny avatar to a bonsai tree that displays the weather in your destination city.
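The geo-anchoring behaviour above can be sketched in a few lines: an object persists at fixed coordinates and only renders when the user is within range. The class, the 50 m radius and the coordinates are illustrative assumptions, not Project Chombo’s actual code.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class AnchoredObject:
    """An AR object that exists independently of the user, at fixed coordinates."""
    def __init__(self, name, lat, lon, radius_m=50.0):
        self.name, self.lat, self.lon, self.radius_m = name, lat, lon, radius_m

    def visible_from(self, user_lat, user_lon):
        # The object always exists; it only *renders* when the user is near.
        return haversine_m(self.lat, self.lon, user_lat, user_lon) <= self.radius_m

billboard = AnchoredObject("retro_billboard", -37.8136, 144.9631)  # Melbourne CBD
print(billboard.visible_from(-37.8137, 144.9632))  # a few metres away → True
print(billboard.visible_from(-33.8688, 151.2093))  # Sydney → False
```

Leaving the room and coming back simply means `visible_from` flips from False to True again; the object itself, and any state attached to it, never goes away.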
Multiple users and unique instances
Following on from the notion of object permanence, users may wish to share their AR content with others. Depending on scarcity, how that AR object was created, and whatever subscription models dictate access to AR content in the future, that object may be a unique instance.
For example, when people play the AR game Pokémon Go they travel to specific locations in the real world to find and capture Pokémon (virtual pocket monsters). If two users are at the same location where a Pikachu Pokémon is available, they can both see a Pikachu and capture it: everyone gets their own Pikachu.
But just like object permanence, adding a real world object property of “unique instance” to an AR object increases user immersion. Imagine if Pokémon Go had this feature. It would mean that when everyone descends on a specific location, there is only ONE Pikachu. Once it is captured, that’s it.
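Enforcing uniqueness is essentially a server-side atomic claim: the first capture wins and every later attempt fails. Here is a minimal sketch of that idea; the registry, its method names and the lock-based approach are illustrative assumptions, not how Pokémon Go (or our own designs) actually work.

```python
import threading

class UniqueInstanceRegistry:
    """Server-side record of unique AR objects and who captured them."""
    def __init__(self):
        self._owners = {}            # object_id -> capturing user (None = unclaimed)
        self._lock = threading.Lock()

    def spawn(self, object_id):
        with self._lock:
            self._owners[object_id] = None   # exists, not yet claimed

    def capture(self, object_id, user):
        """Atomically claim the object; only the first captor gets True."""
        with self._lock:
            if self._owners.get(object_id, "missing") is None:
                self._owners[object_id] = user
                return True
            return False                     # already captured, or never spawned

registry = UniqueInstanceRegistry()
registry.spawn("pikachu#42")
print(registry.capture("pikachu#42", "alice"))  # True  — Alice gets the only one
print(registry.capture("pikachu#42", "bob"))    # False — it's already gone
```

A real deployment would use a distributed store with compare-and-set semantics rather than an in-process lock, but the design question is the same: exactly one winner, no matter how many users reach for the object at once.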
Although we have not yet experimented with this feature, we have begun to incorporate it into future designs and we think there are amazing opportunities in this realm. If an AR experience has greater complexity (multiple objects, animation and audio effects) and is also a unique instance, it could create a much richer, shared experience for a large audience.
Take concerts and festivals, for example. People go to these events to participate in a shared experience. They all see and hear the same content at the same time in one location. If an AR dragon appeared on stage and then launched into the air to circle the stadium, everyone would be watching a unique instance, turning their heads in the same direction.
Add user interaction to this paradigm and AR becomes even more interesting. If we’re all standing around an AR bonsai tree and I “blew” on the leaves, everyone else would see them sway. Microsoft HoloLens already has some of this functionality, allowing a design team to see the same object and make adjustments simultaneously.
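The shared-leaves scenario is, at its core, a publish/subscribe problem: one user's interaction mutates the shared instance, and every viewer is notified. A minimal in-process sketch, with all names assumed for illustration (a real system would fan events out over a networked message bus):

```python
class SharedARObject:
    """One shared instance; every viewer sees the same state."""
    def __init__(self, name):
        self.name = name
        self.state = {"leaves": "still"}
        self._viewers = []

    def join(self, on_update):
        """Register a viewer's callback, invoked whenever shared state changes."""
        self._viewers.append(on_update)

    def interact(self, new_state):
        """Apply one user's interaction, then fan it out to all viewers."""
        self.state.update(new_state)
        for notify in self._viewers:
            notify(dict(self.state))  # each viewer gets a snapshot

bonsai = SharedARObject("bonsai")
seen = []
bonsai.join(lambda s: seen.append(("alice", s["leaves"])))
bonsai.join(lambda s: seen.append(("bob", s["leaves"])))
bonsai.interact({"leaves": "swaying"})   # I "blow" on the tree...
print(seen)  # → [('alice', 'swaying'), ('bob', 'swaying')]
```

The same pattern scales from two colleagues adjusting a model to a stadium crowd watching one dragon: the instance is the single source of truth, and viewers are just subscribers to its state.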
Good design, in general, should take into account that everyone accesses content in a different way. For AR, designers and developers should consider whether content can be provided in multiple, complementary forms, allowing users of varying levels of ability to access and enjoy as many aspects as possible.
A colleague recently visited the Dialogue in the Dark exhibition in Melbourne. Participants were taken through a sensory journey that occurred in total darkness. Removing one’s sight amplified the remaining senses and was a great reminder that truly immersive experiences consider all human senses.
One of the most exciting aspects of AR development is the ability to create multi-sensory experiences. The Voyant AR team have developed experiences that users can see, hear and feel through their smartphone’s screen, speaker and haptic feedback. But as AR hardware and software evolve, many exciting opportunities will begin to emerge. What if we could play on the idea of synesthesia and help people “see” a song or “hear” different shades of blue? I love the work of Luis Hernan, who captured stunning images of wireless networks in the world around us, in his Spirit photography series.
The definitive manual for AR design has yet to be written. And it certainly won’t be written by one person. But early stage experimenting, prototyping and sharing lessons learned will help build a community and repository for creating the most spectacular, engaging and life changing experiences the world has ever seen.