Protecting human rights in the face of rapidly changing technology and innovation

It’s not uncommon, while scrolling through one’s social media feed, to come across clickbait headlines and fear-inducing sound bites foreshadowing the imminent collapse of civilisation at the hands of some emerging or disruptive technology like artificial intelligence, cryptocurrency, 3D printing, drones, robots or self-driving cars.

Often absent, however, is a thoughtful and considered discussion around the development of these technologies and how to avoid worst-case scenarios. But where to begin? It would be almost impossible to create a set of rules or guidelines to encompass every single emerging technology, every possible iteration of that technology and the myriad subsequent applications. A far simpler and more practical approach is to focus on the real issue behind most of the fear: how do we protect ourselves from potential harm? To this end, the existing framework around human rights could provide an ideal place to start.

Human Rights and Technology

The Australian Human Rights Commission launched a human rights and technology project earlier this year, which included the release of the “Human Rights and Technology Issues Paper”. This has been followed by nationwide consultation with invited industry experts. This week, a roundtable discussion was held in Perth with representatives from the Commission, including the current Human Rights Commissioner, Edward Santow.


It was a pleasure to be invited as a representative from Voyant AR and to participate in this discussion, exploring how technology can impact human rights and what can be done to ensure that human rights remain protected in the future. Participants came from industry, government and civic organisations, representing a wide range of sectors including artificial intelligence (AI), augmented reality (AR), computing, blockchain, legal services, web accessibility and disability services.


A central theme in our discussions was the idea of responsible innovation, particularly in the areas of:

  • AI in decision making
  • How people with disability can experience the benefits of new technology
  • How technology impacts specific groups such as children, women and older people
  • How a regulatory framework can protect human rights without stifling creativity and innovation, while remaining flexible enough to encompass rapid change


As an AR Designer who has studied Psychology and International Relations, I believe responsible innovation is critical in the design of user experiences for existing and emerging technologies. This includes consideration of a broader context of who might use that technology, how they are able (or unable) to access it and what the impact of their interaction might be.

Admittedly, this is not an easy approach to take. However, the release of the Human Rights and Technology Issues Paper is both timely and thought-provoking. In the face of fast-paced technological innovation and change, how can we ensure that fundamental human rights are protected?

Human rights and technology are inextricably intertwined

When most people think about human rights they may picture a foreign country, very different from their own, with extreme human rights violations such as human trafficking, unlawful detainment, or ethnic cleansing.

But human rights are very much a part of everyday living. It may not seem obvious at first, but whenever technology can direct, guide or alter human behaviour, there is a potential impact on our human rights.

  • Right to equality and non-discrimination. Are people discriminated against based on their race, colour, religion, sex, sexuality or disability? Some dating apps and websites have been accused of allowing racial profiling.
  • Freedom of expression. How do we balance open communication against the dissemination of hate speech and the proliferation of fake news?
  • Right to benefit from scientific progress. Can all sectors of the community access and use new technology? Barriers can include cost, regional location, language, cognitive ability and access to the internet.
  • Freedom from violence. How could a technology facilitate or incite violence and abuse? What design improvements or safeguards can be put in place to protect users?
  • Accessibility. Can people afford or access the technology required to fulfil basic tasks like paying bills or applying for a job? How can we make technologies and processes as inclusive as possible?
  • National security/counter-terrorism. How do we balance security against threats without infringing on other human rights such as privacy?
  • Right to privacy. The current debate on opting out of My Health Record demonstrates the confusion Australians have around the type of data the government wishes to retain, how it will be used in the future and our right to medical privacy.
  • Access to information and safety for children. How can children benefit from technology while being protected from exploitation? Issues range from sharing photos and personal information to the long-term impact on emotional development.
  • Right to fair trial and procedural fairness. How could AI be used in decision making during the judicial process? Is there still a possibility of gender and racial bias in AI?

Our seemingly benign everyday interactions with technology – like checking email or scrolling through Facebook – are actually the product of millions of choices made during ideation, design, prototyping, iteration, development and deployment. And each choice has the potential to erode human rights or to strengthen and protect them.

The Human Rights and Technology Issues Paper includes a quote from the current Australian Human Rights Commissioner that neatly sums up the idea of responsible innovation: “Technology should exist to serve humanity. Whether it does will depend on how it is deployed, by whom and to what end.”

Responsible innovation needs to be taken seriously by anyone working in technology and initiatives like the government’s Human Rights and Technology project should be applauded and supported. We may fear robots, AI and fake news but ultimately it’s people who are building them.

Taking Art Out of the Gallery

Last weekend I had the privilege of participating in a panel as part of the 2018 Disrupted Festival of Ideas.

Facilitated by Jeremy Smith from the Australia Council for the Arts, the panel also included experimental artist JR Brennan, and director/writer Zoe Pepper, who has worked in theatre, film and participatory gaming.

I thoroughly enjoyed the conversation around the nature of art and context, as well as the opportunity to discuss augmented reality and location-based content. The event was livestreamed and recorded below (video number six in the playlist).

Maps, layers of meaning and location-based content

I’ve always been interested in maps. My parents were immigrants to Australia and for most of my childhood it was just my parents, sister and me. But I still felt connected to my extended family overseas. I remember lying on my bedroom floor with an atlas looking at all the places where we had family. And because I’d been fortunate enough to visit most of them, those little dots and roads and rivers were all imbued with memories.

A few years later, I began to appreciate that there was more to maps than just the end product: maps also spoke about who created them. When I was in grade 3, we learned about geography. I asked a teacher about the lines on the map (the political boundaries) and whether they could change over time. She misinterpreted my question and laughed, saying the lines weren’t really there in real life. But they are. Someone dictates where one state ends and another begins. It also made me wonder: who made maps, and how did they decide what information was included and what was left out?

As I grew older and began to travel on my own, maps took on a much more utilitarian role. But I still appreciated the hidden layers that existed: between the map maker, the map end-user and the location itself. So much so that during my travels, I’ve collected many maps of the places I’ve visited. Some quirky, some interesting but all providing a different account of that place compared to the usual tourist guides.

“A free map for young travellers made by locals”, Brussels, Belgium

In 2010 I travelled to Brussels via the Eurostar, lured by the Comics Museum, art, fries and waffles (in no particular order). Not long after arriving, I managed to find this fabulous map made by locals.


When most people travel they want to go where the locals go, hopefully avoiding expensive tourist traps while also experiencing the best aspects of a place.

Things I love about this map:

– Despite being for “young travellers”, it’s inclusive of different ages, backgrounds and interests. There are tips from a 16-year-old on the best 360-degree view, another from a 65-year-old on where to get good Belgian food, and directions to a gay bar area with free WiFi.

– They use direct quotes from locals which makes the map feel authentic, friendly and personal.

– This tip: “Be yourself, especially if you’re weird. Acting cool may work in Paris, but not in Brussels.”

– And this one: “Drink the real sour Gueuze beer. Tourists say it tastes like vomiting beer instead of drinking it, but they don’t know anything.”

Maps can be fun and informative but sometimes they can have a surprisingly touching effect on people.

“Nancy Chandler’s Map”, Chatuchak Market, Bangkok, Thailand

There was a period when I made numerous trips to Bangkok. I’d visit for a few weeks at a time and would always try to squeeze in a visit to the famous Chatuchak Market. Promoted as the world’s largest weekend market, it has everything from fresh produce and homewares to antiques and live animals. I loved exploring the stalls and getting lost among the countless rows. But sometimes I just wanted to get back to a particular spot – like that cafe with amazing iced coffee. My saviour was Nancy Chandler’s Map of Bangkok, which included an incredibly detailed and accurate layout of the markets.


It was not only wonderfully designed, with bright colours and illustrations, but also a delightful user experience, gifting insight into the locations themselves. Its tone of voice and loving attention to detail made you feel like you were truly getting a local’s insight. And there were no ads – every detail had been personally selected by Nancy Chandler. I felt like I was borrowing a friend’s personal map.

For the markets, there were essential locations (public transport, ATMs, toilets), custom routes (follow the orange path for the vegetarian market), cultural advice (it gets hot so drink lots of water, bargain politely) as well as fun ones (a secret escape route!). I even started scribbling my own notes on the map, a layer that documented my personal connection and context with that place.


Nancy was an artist and mapmaker originally from California who lived in Thailand for 18 years. Her unique background turned pure data (locations, facts, points of interest) into a unique experience for many travellers like me. While writing this article I discovered that she passed away in 2015, and I felt sad about someone I had never met and only encountered through a map. It seemed strange, but after reading the comments left on her website I realised I was not alone.

“While I never met you personally, I’ve enjoyed your maps, coloring books, greeting cards, and other colorful items for years. You’re a legend in Thailand, one that will be remembered and your work enjoyed for many years to come.”

“We’re very sad to hear of Nancy’s death. We didn’t personally know Nancy but on just learning of her death now I have tears in my eyes. We felt her presence there with us in Thailand as we used her maps over the years. What a wonderful contribution she’s made to many people by her great work. We’ve given Nancy’s wonderful maps to many people as gifts over the years and usually pass them onto other tourists in Thailand as we finish each trip.”

“Thank you to the whole Nancy Chandler Maps family for being a burst of color in my life at a major fork in the road a few years ago. When it comes to choosing paths, I’ll always follow Nancy’s.”

These are more than maps. They are a unique set of journeys, feelings and memories attached to specific locations. In this context, a map becomes a vehicle for communication – messages passed from one person to another as a guide to experiencing the world around them and seeing more than meets the eye.

And the essence of that experience shouldn’t change even when maps are consumed in different formats from paper to digital or beyond.

Augmented reality and location-based content

Augmented reality places a layer of digital content over the physical environment. In most cases, it doesn’t matter where you are – the experience is always the same. For example, using Snapchat’s filters to change the way you look.

But there’s a unique opportunity in adapting augmented reality experiences to the user’s current location. Pokémon Go is one of the few apps currently available that adapts augmented reality content based on the player’s location. But there are much more exciting applications in this space.

A single location often serves many different functions depending on who you are. A museum is not just for history fans but also a place of research, a cafe for morning tea, a tourist meeting point, a fun kids’ activity during the holidays and a place to shelter while it rains.

Augmented reality is digital content. It’s content that we can map over physical spaces to create layers of meaning, sharing information to enhance the way a person interacts with a specific location. It’s a big idea and something that the team at Voyant AR are passionate about, with a long-term vision. We’re currently working on a location-based app which will launch to beta very soon.

The world is more than dots and roads and rivers, and augmented reality is much, much more than pocket monsters, face filters and animojis.

It’s about layers of meaning and how you choose to see the world around you.

It’s about augmenting your own reality.

Designing for the future state of AR

The team at Voyant AR have had a lot of fun this year developing and experimenting with AR experiences. It’s made us reflect on how different it is from other digital mediums such as web, mobile and even VR. While VR and AR development may share common tools and frameworks, the process for designing and understanding user experience is VERY different.

Working through various prototypes, we kept note of important issues and surprising discoveries, with the goal of creating an internal design playbook. Our work also stimulated lots of discussion about where AR is headed, what the future may look like and what impact it will have on daily life. Here are the common themes from our findings so far, and their potential implications for designing future AR experiences.

AR devices will soon replace today’s computers

Let that sink in for a moment. We won’t need a smartphone, laptop or desktop computer. All we’ll need is a pair of AR glasses. Because those glasses will be your computer. You’ll wear them all day, every day. 

Designers need to start thinking about the future state of AR – now. We may not yet have all the software and hardware required to make an AR experience as immersive and frictionless as we’d like it to be, but we can imagine and design for that future, then build something today that’s as close as we can get. We’ll have created the foundation for that future experience and as the building blocks and tools become available we’ll be able to adapt quickly. The advantage that our team has found with this approach is that we’re already thinking of future use cases and how we might solve them, even if we can’t do that today.

Assume you can, until you can’t

At this point, designing for AR is a bit like the Wild West. There are no definitive guides, best practices or agreed upon approaches on how to best design for AR. As designers we’re truly blessed to be part of this new era. It’s a once in a lifetime opportunity, to be given the chance to play, experiment and explore this amazing new medium and help shape what’s to come.

When there are so many unknowns with a new technology, there is a tendency to start by creating something easy or simple, just to test things out initially, and then slowly build on that. But in setting the bar so low, so early, you’ve already limited what’s possible. Besides, how do you really know what will be “easy” or “simple” to create?

We need to alter that mindset. When it comes to designing for AR – think big! Then think bigger! The shift towards spatial computing is such a profound transformation that it will impact every aspect of our lives. We don’t know what we don’t know. Our current perspective (and subconscious bias) has been shaped by previous experience designing for web, mobile, social and VR. Some practices might be applicable to AR, but most won’t. So imagine what will be possible – what you want to be possible – and then build it.

To expand a team’s perspective, it helps to include subject-matter experts or users with little technical expertise to provide feedback during the design and prototyping phases. This is good practice for designing user experience in general, but it’s especially helpful for AR. We have found that their feedback and suggestions aren’t as likely to be influenced by prior experience or knowledge. Rather, they’re focused on what makes a good experience and what they’d like to see.

AR “apps” won’t be a thing

Most AR content today is available via an app. But the current “app paradigm” which underpins the distribution and consumption of mobile content will NOT be part of the future state of AR. It is far more likely that AR content will be distributed via layers or filters which people can tune in and out of as they please. (Check out this article by Matt Miesnieks on why the YouTube of AR won’t be YouTube.)

Today’s mobile apps operate in silos. You open one, interact with it, then minimise or close it before opening another. You might feel like you’re multi-tasking with several apps at once, but you’re just swapping rapidly between them. True integration would be like playing Robot Unicorn Attack in your email inbox. Voice assistants (like Google Home, Amazon Echo) are the closest we have today to that type of seamless integration. Behind the scenes, the system is accessing different apps, but what the user perceives is a single easy-to-use interface.
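To make that concrete, here’s a toy sketch (in Python, with made-up handler names and keyword matching) of the routing pattern behind voice assistants: one entry point inspects each request and quietly dispatches it to the right backend “app”, so the user only ever sees a single interface.

```python
from typing import Callable, Dict

# A toy sketch of the "single interface, many apps" pattern.
# Handler names and the keyword matching are illustrative only.

def weather_app(query: str) -> str:
    return "Sunny, 24 degrees"

def clock_app(query: str) -> str:
    return "It's 3:15 pm in Paris"

# The router inspects each request and quietly picks a backend app.
INTENT_HANDLERS: Dict[str, Callable[[str], str]] = {
    "weather": weather_app,
    "time": clock_app,
}

def assistant(query: str) -> str:
    """One seamless interface; the app switching happens behind the scenes."""
    for keyword, handler in INTENT_HANDLERS.items():
        if keyword in query.lower():
            return handler(query)
    return "Sorry, I can't help with that."

print(assistant("What's the time in Paris?"))  # -> It's 3:15 pm in Paris
```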

Future AR content, just like voice assistants, will have an “always on” state. That content will live in your environment, appearing and changing based on your behaviour, explicit commands and pre-programmed contextual awareness (more on this in a future blog post).

What does that mean when designing for future AR experiences? Context becomes king. Where is the user and what are they doing? What is going on around them? Get your hands on any tools that allow you to start exploring this now. We’re already looking at integrating location-based data (Project Chombo), natural language processing and machine learning (Project Evan) to enhance user experience based on contextual awareness.
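As a rough illustration of what “context becomes king” might mean in practice, here’s a minimal Python sketch of an AR layer deciding how to surface content from a snapshot of user context. The fields, thresholds and rules are invented for illustration; they’re not from any shipping framework.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """A snapshot of what the system knows about the user right now."""
    location: tuple       # (latitude, longitude)
    activity: str         # e.g. "walking", "sitting", "commuting"
    hands_free: bool      # can the user gesture right now?
    noise_level: float    # ambient noise, 0.0 (quiet) to 1.0 (loud)

def choose_presentation(ctx: UserContext) -> str:
    """Pick how an 'always on' AR layer should surface its content."""
    if ctx.activity == "walking":
        # Keep visuals peripheral so the user can watch where they're going.
        return "peripheral_visual"
    if ctx.noise_level > 0.7:
        # Too loud for audio cues; fall back to visuals and haptics.
        return "visual_with_haptics"
    if not ctx.hands_free:
        # No hands available: favour gaze and voice interaction.
        return "voice_and_gaze"
    return "full_scene"

# Example: a user walking through a noisy market, phone in hand.
ctx = UserContext(location=(-31.95, 115.86), activity="walking",
                  hands_free=False, noise_level=0.8)
print(choose_presentation(ctx))  # -> peripheral_visual
```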

Although it may not have too much impact for now, it will be important to consider how your AR content may interact (or not) with other AR layers. For example, imagine your AR content is an ’80s layer that transports the user back to 1988 by superimposing retro content over real-world advertising (a concept that Julian Oliver explored with Artvertiser in 2008, transposing art over ads) while playing a radio feed from that era. That AR content layer might include a plugin that applies an “’80s-style filter” to communication tools like email, chat or voice calls.


Aesthetic meets functionality

If AR content is “always on”, it makes sense that it should be displayed or conveyed in an aesthetically pleasing manner. You might want an animated AR bonsai tree because it’s visually appealing in your home and easier than looking after a real one.

But if we extend this notion, AR will also allow designers to ascribe practical function to beautiful objects – both AR and real. A sculpture at your front door reminds you of a colleague’s birthday, a candle holder signals a room’s ambient temperature, or a lamp displays the time. This is exactly the idea we explored with Project Bonsai, where we integrated live weather data from the Australian Bureau of Meteorology to animate a virtual bonsai tree.
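Here’s a minimal sketch of the kind of mapping this involves: a weather observation in, animation parameters out. The field names, thresholds and parameter set are my own illustrative assumptions, not Voyant AR’s actual Project Bonsai implementation.

```python
# Weather observation in, animation parameters out.
# Field names and thresholds are illustrative only.

def bonsai_animation_params(wind_speed_kmh: float, rain_mm: float,
                            temp_c: float) -> dict:
    """Translate a weather observation into animation settings."""
    return {
        # Faster wind -> stronger sway of branches and leaves (capped at 1.0).
        "sway_amplitude": min(wind_speed_kmh / 60.0, 1.0),
        # Any recent rain -> render droplets on the leaves.
        "show_raindrops": rain_mm > 0.0,
        # Nudge the foliage hue from cool green towards warm yellow-green
        # as the temperature rises.
        "leaf_hue": 0.30 + max(min((temp_c - 15.0) / 100.0, 0.1), -0.1),
    }

# Example observation of the kind a weather feed provides.
print(bonsai_animation_params(wind_speed_kmh=25.0, rain_mm=1.2, temp_c=31.0))
```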


Frictionless UI

The future state of AR will include content that is so seamlessly integrated with the real world, it will be difficult to distinguish between what is real and what is not.

AR represents a step change in human-computer interaction: a move into the world of spatial computing, the ability to view and manipulate data in 3D space. The ideal way we interact with that content should therefore be natural, requiring little thought or training.

Lightweight glasses with integrated headphones and microphones, combined with voice and gesture recognition, are the tools that will support the design of these interfaces, allowing more natural user inputs to blend with natural AR outputs. For example, we can’t always use our hands or may not want to speak aloud, so designers need to understand context and accommodate users’ desire for handsfree interaction and privacy accordingly.

Today’s voice assistants are an excellent example of how good frictionless UI can feel. Initiating a request does not require pulling out a phone, finding the right app, opening it, navigating to the right spot and then typing a command; you just ask aloud, “Hey Google, what’s the time in Paris?” Again, whether the user wants to (or can) speak, look, type or gesture to interact with AR content depends on contextual awareness.

I once saw an AR demo that was designed to guide a technician in the use of equipment. It was being reviewed by a vlogger who commented, “I’m not familiar with product X so I was a bit lost while I was using the app”. But a well designed AR experience (or any user experience, for that matter) should be easy to use and understand. If the point of that app was to train new technicians who had no prior experience, then unfortunately it failed. It’s a good reminder that no matter how cool an experience may look or how natural the UI is, designing for AR still requires a solid design approach. (I’ve included some augmented reality design resources here.)

Object permanence

Object permanence is a notion that was pioneered by developmental psychologist Jean Piaget. It describes a child’s ability to understand that objects continue to exist even when they can’t be seen or sensed.

Through initial prototyping, we’ve come to realise that there are important implications for AR. Specifically, there may be certain contexts where it is beneficial for AR objects to behave like real world objects.

  • AR objects can exist independently of the user. For example, the AR object could be anchored to a specific location in your home. When you leave the room, the object remains. When you return, it’s still in the same location. This may sound like a redundant feature but it actually facilitates user immersion; when AR objects behave like real objects, they seem more like real objects. I explored this idea through a mixed reality game design for the horror movie It Follows.
  • AR objects can be anchored to a set of geo coordinates in the real world. If an AR object is anchored to a location in the real world and can only be accessed at that location, it’s inherently more interesting than content you can access from your lounge room. Extending this idea, we could take specific context from that location (history, utilities, public services) and alter the content accordingly. We’re currently exploring location-based content with our current project, Chombo, placing AR experiences at specific geo coordinates in the real world (see the sketch after this list).
  • AR objects could travel with you. If an AR object is a utility, you may choose to take it with you. It could be anything from your voice assistant represented as a cute fluffy bunny avatar to a bonsai tree that displays the weather in your destination city.
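Here’s a small sketch of what geo-anchored, persistent AR objects could look like under the hood, assuming a simple JSON store and a distance check. The data model and the 50-metre unlock radius are illustrative assumptions, not our production design.

```python
import json
import math
from dataclasses import dataclass, asdict

@dataclass
class AnchoredObject:
    """An AR object persisted at real-world coordinates."""
    object_id: str
    latitude: float
    longitude: float
    asset: str  # e.g. a reference to the 3D model to render

    def distance_m(self, lat: float, lon: float) -> float:
        """Approximate ground distance from a user position, in metres."""
        # Equirectangular approximation: fine over the short ranges AR cares about.
        dlat = math.radians(lat - self.latitude)
        dlon = math.radians(lon - self.longitude) * math.cos(math.radians(lat))
        return 6371000 * math.hypot(dlat, dlon)

def visible_objects(anchors, lat, lon, radius_m=50.0):
    """Only objects anchored near the user's position are unlocked."""
    return [a for a in anchors if a.distance_m(lat, lon) <= radius_m]

# Persist the anchors so the object is still there when the user returns.
anchors = [AnchoredObject("bonsai-01", -31.9505, 115.8605, "models/bonsai.glb")]
with open("anchors.json", "w") as f:
    json.dump([asdict(a) for a in anchors], f)

print(visible_objects(anchors, -31.9504, 115.8606))  # within 50 m: unlocked
```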

Multiple users and unique instances

Following on from the notion of object permanence, users may wish to share their AR content with others. Depending on scarcity, how that AR object was created and whatever subscription models dictate access to AR content in the future, that object may be a unique instance.

For example, when people play the AR game Pokémon Go they travel to specific locations in the real world to find and capture Pokémon (virtual pocket monsters). If two users are at the same location where a Pikachu Pokémon is available, they can both see a Pikachu and capture it: everyone gets their own Pikachu.

But just like object permanence, adding the real-world property of “unique instance” to an AR object increases user immersion. Imagine if Pokémon Go had this feature. It would mean that when everyone descends on a specific location, there is only ONE Pikachu. Once it is captured, that’s it.
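A minimal sketch of what “first capture wins” bookkeeping could look like. In a real game the arbitration would have to happen server-side; this single-process version just illustrates the atomic claim.

```python
import threading

class UniqueInstanceRegistry:
    """First-capture-wins bookkeeping for one-of-a-kind AR objects."""

    def __init__(self):
        self._owners = {}            # object_id -> user_id of the captor
        self._lock = threading.Lock()

    def try_capture(self, object_id: str, user_id: str) -> bool:
        """Atomically claim an object; False means someone beat you to it."""
        with self._lock:
            if object_id in self._owners:
                return False         # already captured: that's it
            self._owners[object_id] = user_id
            return True

registry = UniqueInstanceRegistry()
print(registry.try_capture("pikachu-perth-001", "alice"))  # True: Alice got it
print(registry.try_capture("pikachu-perth-001", "bob"))    # False: too late
```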

Although we have not yet experimented with this feature, we have begun to incorporate it into future designs, and we think there are amazing opportunities in this realm. If an AR experience has greater complexity (multiple objects, animation and audio effects) and is also a unique instance, it could create a much richer, shared experience for a large audience.

Take concerts and festivals, for example. People go to these events to participate in a shared experience. They all see and hear the same content at the same time in one location. If an AR dragon appeared on stage and then launched into the air to circle the stadium, everyone would be watching a unique instance, turning their heads in the same direction. 

Adding user interaction to this paradigm, AR becomes even more interesting. If we’re all standing around an AR bonsai tree and I “blew” on the leaves, everyone else would see them sway. Microsoft Hololens already has some of this functionality, allowing a design team to see the same object and make adjustments simultaneously.

Accessibility

Good design, in general, should take into account that everyone accesses content differently. For AR, designers and developers should consider whether content can be provided in a multifaceted way, allowing users of varying levels of ability to access and enjoy as many aspects as possible.

A colleague recently visited the Dialogue in the Dark exhibition in Melbourne. Participants were taken through a sensory journey that occurred in total darkness. Removing one’s sight amplified the remaining senses and was a great reminder that truly immersive experiences consider all human senses.

One of the most exciting aspects of AR development is the ability to create multi-sensory experiences. The Voyant AR team have developed experiences that users can see, hear and feel through their smartphone’s screen, speaker and haptic feedback. But as AR hardware and software evolve, many exciting opportunities will begin to emerge. What if we could play on the idea of synesthesia and help people “see” a song or “hear” different shades of blue? I love the work of Luis Hernan, who captured stunning images of the wireless networks in the world around us in his Spirit Photography series.


Luis Hernan, Spirit Photography series.

The definitive manual for AR design has yet to be written. And it certainly won’t be written by one person. But early stage experimenting, prototyping and sharing lessons learned will help build a community and a repository for creating the most spectacular, engaging and life-changing experiences the world has ever seen.

Augmented Reality Design Resources

Many people have asked how I design for augmented reality. This is still a very new area, and future experts will come from very diverse backgrounds, each bringing something unique from their own perspective.

My professional background is a colourful mix of psychology, anthropology, copywriting, marketing, digital content production, international aid work, and general sci-fi geekiness. In other words, I’m a self-taught Augmented Reality Designer.

I’ve listed below some references that I have personally found valuable. I’ll revisit this page from time to time and update with new resources. If you know of any other useful material, let me know.

ONLINE

Apple. Human Interface Guidelines.

Apple. Human Interface Guidelines for Augmented Reality.

Google. Best Practices for Mobile AR Design.

Leap Motion. Designing Intuitive Interactions.

Microsoft. Windows Mixed Reality Design.

NON-FICTION

Very few of the titles I have listed below refer explicitly to AR/MR/VR/XR.

Most topics are around general design or technology design because I believe that fundamental design principles are still applicable even for emerging technology. Moreover, it’s important to at least revisit and consider existing paradigms, before attempting to redesign the future (or reinvent the wheel).

I’ve also listed a few other seemingly random topics because they’re actually very relevant:

  • Game design principles for understanding user behaviour, game mechanics and reward systems as well as navigating and displaying complex structures of information.
  • Mapping and cartography, especially navigation, exploration and location based content.

Brathwaite, Brenda and Schreiber, Ian. (2008). Challenges for Game Designers. Cengage Learning.

Coates, Nigel. (2003). Ecstacity. Laurence King Publishing.

Follett, Jonathan (editor). (2014). Designing for Emerging Technologies. O’Reilly Media.

Fullerton, Tracy. (2010). Game Design Workshop: A Playcentric Approach to Creating Innovative Games. Taylor & Francis Ltd.

Hartson, Rex. (2012). The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Elsevier Science & Technology.

Hessler, John. (2015). Map: Exploring the World. Phaidon Press.

Kadavy, David. (2011). Design for Hackers. John Wiley & Sons.

Koster, Raph. (2014). A Theory of Fun. O’Reilly Media.

Krug, Steve. (2014). Don’t Make Me Think. Pearson Education.

Norman, Don. (2013). The Design of Everyday Things. The Perseus Books Group.

O’Connell, Kharis. (2016). Designing for Mixed Reality. O’Reilly Media. (Free ebook.)

Pearl, Cathy. (2016). Designing Voice User Interfaces: Principles of Conversational Experiences. O’Reilly Media.

Schell, Jesse. (2008). The Art of Game Design: A Book of Lenses. Taylor & Francis Inc.

Scoble, Robert and Israel, Shel. (2016). The Fourth Transformation: How Augmented Reality & Artificial Intelligence Will Change Everything. Patrick Brewster Press.

Tidwell, Jenifer. (2015). Designing Interfaces. O’Reilly Media.

FICTION

Last but definitely not least, if you’re going to design the future I strongly recommend that you learn about visions of the future from the people who have already imagined it: science fiction authors. Novels are so much better than movies because writers aren’t restricted by special effects and CGI – they build worlds with words. And the very best imagine not only future technology but how people will use it and its impact on society. These are some of my favourite authors.

  • Douglas Adams
  • Margaret Atwood
  • Ray Bradbury
  • Philip K Dick
  • Cory Doctorow
  • William Gibson
  • Aldous Huxley
  • Richard Morgan
  • George Orwell
  • Alastair Reynolds
  • Mary Shelley
  • Neal Stephenson
  • Charles Stross
  • Tad Williams

Taking hold of the future: gesture interface design for mixed reality

I’ve been interested in gesture/touchless user interfaces for a while – especially the use of hand gestures and body movement as inputs to interact with computer programs. The first time I can remember seeing an actual demonstration of this on the big screen was in Johnny Mnemonic. The manipulation of virtual displays and data in virtual reality using interactive gloves was so futuristic… even in 1995.


But it wasn’t until 2002, when Tom Cruise donned a pair of gloves in Minority Report, that the public really got a feel (pardon the pun) for how our bodies and natural movements could be used to interact with virtual content.


Since then, science fiction films have provided many more examples of how gestural interfaces could enhance work, play and retail in the future. A few of these examples include…

The Lawnmower Man (1992) – earlier than Johnny Mnemonic, but I didn’t get around to watching it until much later…


Firefly TV series (2002)…


Iron Man (2008)…


Prometheus (2012)…


… and more recently Ghost in the Shell (2017).


These visions are a great source of inspiration for gesture design in the augmented and mixed reality space. As visual prototypes, they explore:

  • Physical interactions with digital/virtual content.
  • Use of hardware and peripheral accessories for input and output (eyewear, gloves, helmets, body suits).
  • Methods for accessing digital content (retrieve, display, navigate, share, archive).
  • Use cases (how different users might interact with the same system).
  • Potential constraints (spatial, security, permissions).

However, the designers and art directors who create these amazing visions of the future are designing deliberately for the big screen. That is, their designs are more of a storytelling tool than a true user interface.

For example, Jayse Hansen is a creative art director who specialises in what he calls FUI (Fake User Interfaces). He has worked on many films including Iron Man, Avengers, Hunger Games, Guardians of the Galaxy and Star Wars. He explains the difference between designing an FUI versus a real UI in an interview with The Next Web.


You have to allow yourself the freedom to go far outside what’s ‘real’ and enter into something that’s more fantastical. A lot of the time, with film UI’s, you’re attempting to show what’s going on behind the screen; to show graphically what the computer is doing. Whereas, with real UI’s, you’re usually attempting to hide it. So allowing for non-real, non-usable creativity is essential to a good story-telling UI. Someone once called it an NUI — or non-usable-interface  — and I kind of liked that. You do have to break some usability rules to make it dynamic enough.

It’s great to get inspiration from film, but we must acknowledge that these mixed reality designs are not necessarily going to work in real life unless they have taken into account real use cases.

A more in-depth yet still fictional exploration of gesture design can be found in Heavy Rain, a video game released in 2010. The main character uses augmented reality to access virtual data in both real-world and virtual environments. (Note: floating white symbols denote the game control interface, while orange symbols, objects and text represent AR assets.)

Augmented reality demonstration starts at 05:20.

Gesture design and user input

Several years ago I set out to learn more about gesture design. But I wanted to start with the basics of user input. I wanted to understand the fundamentals of how hands and fingers communicate in 3D space. So while living in Singapore I took a beginner’s class in sign language at the Singapore School for the Deaf.

I’ve always found sign language fascinating. As a form of communication without spoken or written words, it seemed to me like a secret world of coded messages relying on swift yet subtle hand movements. I thought that if I could convince someone else to learn sign language with me, we would be able to exchange all kinds of secret messages, even in the presence of others. But it turned out to be a lot tougher than I anticipated.

Firstly, my fingers were exhausted by the end of the first lesson. Making those gestures and switching between them quickly reminded me of learning to play a musical instrument. I was surprised by how quickly my hands and fingers began to physically fatigue.

Some alphabet signs. “Signing Exact English” by Gerilee Gustason and Esther Zawolkow. Illustrations by Lilian Lopez. 1993, Modern Signs Press.

Secondly, as anyone who has learned a second language knows from experience, learning how to express yourself is only half the exercise. You must also learn to understand someone else and what they’re trying to communicate. In sign language, you quickly learn to recognise the same hand gestures and movements when they are formed by another person. You also learn to appreciate the subtlety of an incorrectly positioned thumb or overlapping finger. I also learned that good sign language is just like any other form of communication: to prevent misunderstandings you must be clear and direct with your expression.

Learning all those hand positions required my brain to rewire in what seemed like a really unnatural way – at first. You can almost feel your brain trying to “talk” to your fingers. I had to exercise my procedural memory, forcing my muscles to form, practice and recall those shapes and movements.

I also realised that an important part of sign language isn’t even about your hands. If you’re signing to say that you don’t feel well, you have to express that in your face with a sad or pained expression. If you think about it, congruence between words and expression is important even in communication between two non-signing people – imagine if I said “I’m really upset with you” but smiled as I spoke.

Even though it was difficult, I also avoided trying to “translate” each sign into English in my head, and instead tried to associate the concept/noun/verb directly with the hand gesture.

I learned that there’s a compound effect in sign language as you add each additional element such as movement, another hand and/or facial expression:

  • One hand: shape
  • One hand: shape + movement
  • Two hands: shape + movement + interaction between hands
  • Two hands and face: shape + movement + interaction + facial expressions

Although a beginner’s class was only a taste of a rich and varied language, I learned a lot about user experience, user input, gestures and how to sign Tyrannosaurus Rex. But what about software and devices that can “translate” our gestures?

Leap Motion

Launched in 2012, Leap Motion is a sensor device that recognises hand gestures without physical touch and transforms these into user inputs.


For example, the software not only recognises hand and finger positions and movements, it also recognises the discrete motion of a finger tracing a circle in space as a Circle gesture.

Circle gesture via Leap Motion.
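To show the kind of geometry such a recogniser deals with, here’s a small self-contained Python heuristic for detecting a circle traced by a fingertip. This is not Leap Motion’s API or algorithm, just an illustrative check: do the samples sit roughly equidistant from their centroid, and does the path sweep a full turn?

```python
import math

def is_circle_gesture(points, tolerance=0.25):
    """Heuristically decide whether a fingertip path traces a circle.

    `points` is a list of (x, y) fingertip positions sampled over time.
    The path counts as a circle if every sample sits roughly the same
    distance from the centroid and the path sweeps a full turn.
    """
    if len(points) < 8:
        return False
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0 or any(abs(r - mean_r) / mean_r > tolerance for r in radii):
        return False  # too lumpy to be a circle
    # Sum the signed angles swept between consecutive samples.
    angles = [math.atan2(y - cy, x - cx) for x, y in points]
    swept = sum(math.atan2(math.sin(b - a), math.cos(b - a))
                for a, b in zip(angles, angles[1:]))
    return abs(swept) >= 2 * math.pi * 0.9  # close to one full turn

# A 16-step circle of radius 1 should register as a Circle gesture.
circle = [(math.cos(t * math.pi / 8), math.sin(t * math.pi / 8))
          for t in range(17)]
print(is_circle_gesture(circle))  # True
```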

The Leap Motion device connects to a computer via USB and has more recently been used by clever developers to track hand gestures and recognise user inputs in virtual reality. Not surprisingly, Leap Motion have since created a VR-specific developer kit. This opens the possibility of registering user inputs beyond the native controllers specific to each VR platform.

Leap Motion Orion Blocks demo. Image via Leap Motion.

The company has also developed the Leap Motion Interaction Engine, a layer which operates between the Unity game engine and real-world hand physics, and has published a nifty guide on its principles of interaction design.


Microsoft Hololens Design Guide

Microsoft has published a design guide for Hololens which includes designing gesture inputs. The system only recognises a few gestures: Ready Mode, Air Tap (similar to a mouse click), Air Tap and Hold (similar to mouse click and hold/drag), and Bloom (return Home). But when used in conjunction with user gaze and voice commands, these gestures become more powerful.
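Here’s a toy sketch of why that combination is powerful: a tiny gesture vocabulary becomes expressive when gaze supplies the target and voice supplies the verb. The event names and fields below are made up for illustration; they are not the Hololens SDK.

```python
from typing import Optional

# A toy sketch of the "gaze + gesture + voice" pattern.
# Gesture names, targets and fields are illustrative only.

def handle_input(gaze_target: Optional[str], gesture: str,
                 utterance: Optional[str]) -> str:
    """Resolve a sparse gesture vocabulary into a concrete command."""
    if gesture == "bloom":
        return "go_home"                      # global gesture: no target needed
    if gesture == "air_tap":
        if gaze_target is None:
            return "ignored"                  # nothing under the user's gaze
        if utterance:
            return f"{utterance} -> {gaze_target}"   # voice refines the action
        return f"select -> {gaze_target}"     # a bare tap defaults to select
    return "ignored"

print(handle_input("hologram-chair", "air_tap", None))      # select -> hologram-chair
print(handle_input("hologram-chair", "air_tap", "resize"))  # resize -> hologram-chair
print(handle_input(None, "bloom", None))                    # go_home
```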

What does the future of gesture hold for mixed reality?

Tango is Google’s AR platform. From what I’ve seen in demos, it’s a very powerful platform but is only available on two mobile devices: the Lenovo Phab 2 Pro and the Asus Zenfone AR. But today marked an interesting turning point in Google’s AR journey. Google announced ARCore, a baked-in augmented reality platform for Android developers. This is obviously Google’s response to the hype around Apple’s ARKit, which has fuelled the imagination of developers around the world. However, both ARKit and ARCore (to the best of my knowledge) rely on touch inputs via the mobile screen. And therein lies a user interaction issue.

While using your mobile phone, you aren’t handsfree. You need at least one hand to hold it. So now you’re reduced to touch inputs via one hand, while trying to position the phone with your other hand so you can see the real world through the phone’s camera.

Even if you could strap your phone to your head (à la Google Cardboard) freeing both hands AND if your phone could recognise hand gestures in the real world, your phone would still only be able to recognise gestures that are within the camera’s field of view.

“Measuring the Field of View for the iPhone 6 Camera”, Wired.com, 15 May 2015.

So you’re back at the same problem that Hololens and Meta face: if it can’t see your hands, it won’t register the inputs.

But Google, being Google, have also been busy experimenting with touchless gestures as the ultimate in future user inputs.

Project Soli is one such experiment. Created by Google’s Advanced Technology and Projects group (ATAP), the team has investigated the use of miniature radar to detect gesture interactions without touch. The subtle, fine movements that the system is able to recognise and respond to are really quite beautiful to watch.

Also from Google’s ATAP team is Project Jacquard. In this video, we see Ivan Poupyrev describe how they created a new interactive system by weaving the technology into the fabric itself. As he says, “If you can hide or weave interactivity and input devices into the materials that will be the first step to making computers and computing invisibly integrated into objects, materials, and clothing”.

Can you imagine clothing that recognises simple touch gestures?

In a future where everyone wears a mixed reality headset, this type of technology would certainly help address the user fatigue that comes from waving your hands in the air all the time. The user could assume a natural posture and place their hands wherever feels comfortable and natural. For example, while sitting on a chair it’s natural to rest your hands on your thighs or fold your arms. All you’d have to do then is brush your jacket sleeve or tap your knee to open an app.

Brushing fabric is a very gentle, quiet and subtle interaction. I may be a technophile but I do cringe at the thought of using Hololens in a public place. Interacting with mixed reality content is uber cool, but when you’re the only one who can see that content, you look… well, kinda weird.

Microsoft Hololens review by James Mackie.

A summary of UX considerations

It’s important to note that while gesture input is great, it still has some drawbacks.

  • Fatigue. As mentioned previously, user fatigue is probably the biggest issue. Your hands make small, intricate movements which are repeated over time. Your arms also get tired from making large gestures or just holding your hands in position in the air. This is an important lesson for mixed reality gesture design: we must consider the user and their ongoing comfort when interacting with our system.
  • Lack of physical feedback. Unlike pressing a button in real life, there is no haptic/tactile feedback to let the user know that a button has indeed been pressed. I imagine that physical resistance would be important in use cases such as a surgeon conducting remote surgery via a robot. This is where haptic gloves or other body wear that provides the user with physical feedback would still be useful and preferred.
  • Complexity. I wouldn’t advise anyone to design a user interface that relies on too many different finger or hand gestures. Keeping it simple helps everyone: designers, developers and the end user. But simple doesn’t mean limited. Think about the traditional computer mouse, with only two-button click and scroll functionality. Combined with software that indicates where and when to click, it provides a much greater range of possible interactions (see the sketch after this list).
  • Coordination. It can also be difficult for the user to initially coordinate movements when manipulating virtual objects. For example, when I played around with a Zapbox demo I found it tricky to move my hand through z-space and “find” the point at which a real object intersected with a virtual one.
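As promised above, a minimal sketch of the “simple doesn’t mean limited” point: a two-gesture vocabulary multiplied by context, the same way two mouse buttons go a long way. The gesture and focus names are invented for illustration.

```python
# A small gesture vocabulary whose meaning depends on what the user is
# focused on. Gesture and context names are made up for illustration.

GESTURE_ACTIONS = {
    # (gesture, what the user is focused on) -> action
    ("pinch", "document"): "grab_and_move",
    ("pinch", "volume_dial"): "adjust_volume",
    ("swipe", "document"): "next_page",
    ("swipe", "photo_carousel"): "next_photo",
}

def resolve(gesture: str, focus: str) -> str:
    """Two gestures, many actions: context does the disambiguating."""
    return GESTURE_ACTIONS.get((gesture, focus), "no_op")

print(resolve("pinch", "volume_dial"))     # adjust_volume
print(resolve("swipe", "photo_carousel"))  # next_photo
```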

The recent developments in this space are very exciting. There’s a lot to consider from a UX point of view when designing mixed reality experiences. As science fiction author Arthur C. Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.” As the future of mainstream mixed reality comes closer, so too does the ability for gestural interfaces to become more like… magic. Who knows – I might one day be able to summon a cloud just as my childhood hero Monkey once did.