The way a user looks while using technology is crucial. Not just in an aesthetic “I want to look cool” way; for most mainstream technology users, it’s more of an “I don’t want to look like an idiot” way. How a user feels, and how they think they are perceived by others, determines whether they feel comfortable while using a piece of technology and, ultimately, whether they continue using it.
A user’s appearance can also communicate messages to the people around them. These messages may be deliberate or unintended. In this post I explore how user experience in mixed reality should consider not only the primary user but also secondary users.
An early insight into augmented reality user experience issues
Ah Google Glass. You were so exciting at the time, but quickly fell victim to the social equivalent of a pitchfork mob hunting down Frankenstein’s monster.
Early adopters of Google Glass were nicknamed “glassholes” (I’ll be polite and refer to them as Glassers) by members of the public who encountered them. Although I never came into direct contact with someone using Glass, I believe the hostility towards these users wasn’t due solely to their actual behaviour but to their perceived actions, in two ways.
Firstly, from what I have read about people’s experiences when engaging with a Glasser, they felt that Glassers encroached on their private space by recording them without their permission. Secondly, people felt unsettled and annoyed when engaged in conversation because the Glasser’s gaze darted to the small display screen in the corner of their field of view. Ironically, what people may have experienced were two extremes of the same spectrum.
On one hand they may have felt as though they were being probed and examined without their consent, knowing that once recorded, their version of “self” could be re-probed and re-examined ad infinitum for any number of flaws and weaknesses. On the other hand, their ego may have been bruised during the interaction because they didn’t command the Glasser’s full attention. Many couples have arguments that start simply because one partner does not appear to be really listening (although they can successfully repeat the last thing that was said). So privacy and social interaction norms appear to be two critical issues, not for the user herself but for the people around her. That is, secondary users.
This is really interesting food for thought. Currently, user experience design for mobile technology focuses predominantly on the user. But I wonder how many designs consider (either implicitly or explicitly) the effect of their hardware or software on the people with whom the user interacts.
Recording and privacy
When smartphones were first designed with cameras, the “red recording light” that was commonplace in most video recording devices at the time was omitted. Thus, the only signal indicating that a device was in “recording mode” was removed.
This omission applies to all smartphones (as far as I’m aware) and has been widely accepted by smartphone owners with, it seems to me, little objection. This design default worked against Google Glass when it was released, because its hardware design included three signals to indicate when video recording was in progress:
- Illumination. The device’s screen was illuminated whenever it was in use, including when recording video or taking a photo. Unfortunately this created the impression that the primary user was recording all the time, even when they were not.
- Voice command. The primary user could speak a command – “Ok Glass, record a video” – to commence recording. However, if a secondary user was not present at the time the command was executed, they would not be aware that a recording had commenced.
- Gesture command. The primary user could press a button on the Glass’s frame to commence recording. As per the previous signal, a secondary user had to witness this gesture to know a recording had commenced.
The illumination signal is the strongest cue to secondary users that a video recording is in progress. This is because it taps into a design pattern that is familiar to most secondary users of a certain age (the red light of an old-fashioned video recorder).
So what can we learn about secondary users’ experience from the demise of Google Glass? People desire, and have the right, to know when they are being recorded and to give their consent. Smartphones have largely avoided this issue because they don’t advertise when they are in recording mode. Of course, simply holding your phone at a particular angle for a length of time has become the well-known posture of someone taking a photo or recording.
Currently the power of recording lies in the hand of the recorder. But what if we could reverse this situation? What if the subject could control when and how they were recorded? Or whether they were recorded at all?
Charlie Brooker’s Black Mirror series has provided deliciously entertaining (and chilling) examples of near future scenarios where modern day technology has been taken to extremes. In the “White Christmas” episode, users with enhanced augmented reality vision could block the image and sound of selected people from their field of view (replaced with a pixelated form as per the image below). In turn, those selected people could not see or hear the user. If both viewer and subject use the same mixed reality hardware/software/platform/network then the subject can “cloak” themselves from recording.
This Black Mirror example demonstrates a scenario between two people, a “one-to-one” context. However, this could also become a default setting in a “one-to-many” context. For example, when the user is in a public space they are hidden by the cloaking effect, but the effect is deactivated when visiting a friend’s house.
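The one-to-one and one-to-many defaults described above boil down to a simple policy rule. A minimal sketch, assuming a hypothetical cloaking API: the function name, the idea of a viewer whitelist and a context string are all my own invention, not anything from the episode or any real platform.

```python
# Hypothetical sketch: context-based defaults for a mixed reality
# "privacy cloak". All names here are invented for illustration.

def cloak_active(context: str, whitelist: set, viewer: str) -> bool:
    """Return True if the subject should appear cloaked to this viewer."""
    if viewer in whitelist:       # one-to-one: a trusted friend sees through
        return False
    return context == "public"    # one-to-many: cloaked by default in public

# Cloaked from strangers in public, visible at a friend's house.
print(cloak_active("public", {"alice"}, "stranger"))   # True
print(cloak_active("private", {"alice"}, "stranger"))  # False
```

The design choice worth noting is that the subject, not the recorder, owns the rule: the whitelist and the context default both live with the person being viewed.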
While taking group wedding photos outdoors in summer, photographers in Australia often request that guests remove their sunglasses. Perhaps a future request will be to “deactivate your mixed reality privacy cloaks” granting the bridal couple permission to record their image.
This has an interesting impact on how we might record and reflect on future historical events. Or our memories of past events. Will our future holiday photos look like this?
The above image is from the same Black Mirror episode. A user, found guilty of a crime, is punished via his augmented reality enhanced vision: he can no longer see nor hear other people for the duration of his sentence. An enforced digital isolation. But if private companies own and manage these mixed reality networks and associated content management systems, will government or law enforcement bodies have the power (or right) to police, intervene in and control the content streams of their citizens?
Social interaction norms between primary and secondary users
Take a look at this picture.
Would you feel comfortable having a casual chat with this guy? Perhaps not. It reminds me of talking to someone while they’re wearing sunglasses indoors. Both situations feel unsettling. Why?
- Trust. His eyes are partially hidden by the headset (I’m not aware of how occluded a user’s eyes may become when viewing content along the spectrum from augmented to full virtual reality content). This means you can’t tell where he is looking and research has shown that people rely on predictive gaze cues as a way of judging whether a person is trustworthy. Engaging in a simple conversation requires the interpretation of many complex and subtle eye movements that signal everything from interest and surprise to concern or boredom.
- Primary user distraction. As the user engages with mixed reality content displayed on their screen, they are likely to become momentarily distracted from conversation. For example, a slight eye flicker towards their periphery when a notification pops up. But this might actually be an improved interaction compared to what happens with smartphones today, where attention diversion is much more obvious. (A notification sound chimes and/or the screen flashes, we pull the device towards us, look down, type a response with both hands while a frustrated friend drums their fingers on the table.)
- Unfamiliar social context. As a new piece of tech, it can be uncomfortable and distracting to interact with someone using it. There are no cues or social norms (yet) for how to interact with someone wearing such a device. How should we approach them without interrupting their activity? How do we know when the device is “in use” (assuming that in the future such a device could be worn for long periods of time, otherwise just wearing it would indicate that it was “in use”)? And probably the most simple but important scenario: how do we know the user is looking at us rather than content on her screen? (We’ve all been in that awkward situation where we’ve waved back at someone only to realise that they were actually waving at someone else behind us.)
These issues are not just a problem for Meta but for other hardware developers in the mixed reality space. As pictured below, Microsoft Hololens and Magic Leap may also face similar backlash from secondary users. However, in certain contexts it may be perfectly acceptable to wear and see others wearing these headsets (for example, work or educational settings). If you work in the tech, digital or IT industries you may even look forward to playing with these gadgets.
But aside from such contexts how could we solve these privacy concerns and social interaction shortcomings?
- Signals that indicate function modes. Mixed reality hardware should feature signals that indicate to secondary users that certain functions are in operation. If a user is recording a video feed via the headset, there should be a signal to indicate that video recording is in progress. If a secondary user is also wearing a headset this task becomes easier: you can display this information in their mixed reality content stream. How many signals might be needed, and which features are most important to highlight, is unclear. However, rapidly prototyping “fake” headsets to test different types of signals and noting the experiences of primary and secondary users could be very useful.
- Privacy settings. An individual should have the right to determine when they are recorded. Unfortunately the horse has already bolted in terms of privacy and technology. Currently, people seem to have an expectation that they can record anything they like on their smartphone but at the same time expect a degree of privacy from other people with smartphones. Current Australian government legislation is not entirely clear about the circumstances in which individuals may record conversations or digital interactions. In the future, when everyone is recording everything all the time, can we really expect government agencies to police and enforce who is recording what? Or should these safeguards be built into the hardware/software itself from the outset? Moreover, facial recognition software has developed in leaps and bounds since the release of Google Glass. Mixed reality will allow anyone to view a person and instantly search for their digital identity online. Useful at networking conferences but potentially dangerous at nightclubs or bars. Will there be the equivalent of a “mixed reality SEO scrambler” to stop people from being recognised?
- Fair exchange of utility and value. Many people enjoy free digital services knowing (to an extent) that they are “paying” for this convenience with data about themselves and their online behaviour. History has shown that users are happy to give up a degree of privacy and initial social awkwardness (when trying new products or services) in exchange for what they perceive to be useful or essential applications (email, restaurant recommendations, travel directions, photo backups etc). Mixed reality software and applications will need to compensate users for any inconvenience with what is perceived to be adequate data, tools or services.
- Mass adoption. Not so much a solution as an evolution of technology uptake. If everyone is wearing a mixed reality device, it levels the playing field. The ability for individuals to control their own image presents the public with a strong incentive for rapid mass adoption of such technology: you need to own this hardware in order to be excluded from default recording.
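The first suggestion above, signals that indicate function modes, could be prototyped as a tiny state broadcast: the headset publishes which functions are active, and nearby secondary users’ devices (or an external light) render that state. This is a speculative sketch under my own assumptions; the class, method names and the idea of a Bluetooth-style broadcast are all hypothetical, not any vendor’s API.

```python
# Hypothetical sketch of "function mode signals": a headset tracks its
# active modes and publishes a summary that nearby devices could display.
from dataclasses import dataclass, field

@dataclass
class HeadsetSignals:
    modes: set = field(default_factory=set)   # e.g. {"recording"}

    def start(self, mode: str) -> None:
        self.modes.add(mode)

    def stop(self, mode: str) -> None:
        self.modes.discard(mode)

    def broadcast(self) -> dict:
        """What a secondary user's device (or an outward LED) would see."""
        recording = "recording" in self.modes
        return {"recording": recording,
                "led_colour": "red" if recording else "off"}

headset = HeadsetSignals()
headset.start("recording")
print(headset.broadcast())  # {'recording': True, 'led_colour': 'red'}
```

The “red when recording” default deliberately reuses the old video-recorder convention discussed earlier, since it is the one signal secondary users already know how to read.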
These are very real issues that will affect users in the very near future. But what about beyond the current hardware iteration of augmented reality headsets or glasses? What might lie ahead?
An obvious evolution from headsets to glasses is contact lenses. Google has already done pioneering work in this field, developing contact lenses that monitor glucose levels in people with diabetes.
I’m a massive fan of William Gibson. I was hooked as soon as I read his iconic cyberpunk novel Neuromancer. I loved the character Molly Millions, who had some very extensive augmentations, including a pair of vision-enhancing mirrored lenses implanted within her eye sockets.
I was intrigued by what Molly would have been able to see. But I was also conscious that her augmentation was irreversible, thereby changing irrevocably the way she interacted with other people. It would have been startling to encounter her for the first time. Similar to engaging with someone wearing sunglasses indoors as mentioned earlier in this post, Molly would not have been able to exhibit social cues that are normally expressed through the eyes.
In 2008 I wrote a science fiction novel. Set in the near future of 2045, it explored themes of identity, memory, augmentation and the extent of personal agency. I found it was an extremely useful way of exploring the context and use cases for future technology. One piece of technology that I developed was mixed reality content streams accessible through “cat lenses”.
Many animals, including cats, have a third eyelid known as a palpebra tertia or nictitating membrane, which helps maintain the health of the eye. In my novel, people had cat lenses surgically implanted within their eye sockets. But unlike Molly’s mirrored lenses, which remained intact permanently, cat lenses could close and retract via muscles around the eye. In closed mode, the eyelid completely covered the eye. The user could view mixed reality content on the surface of the eyelid, displaying content over their full field of vision. When fully retracted, the cat lenses were no longer visible. This functionality also provided secondary users with a “signal” for the cat lenses’ operational modes: lids closed (augmented vision on) or lids open (augmented vision off). The lenses themselves could change appearance depending on the user’s preference, from completely opaque to partially transparent or displaying an image. The design also provided primary users with the ability to revert to a “normal appearance”, facilitating regular social interaction.
Although Google Glass joined the ranks of technology that was never mass adopted, it has served as an important case study into the way primary and secondary users interact with augmented reality. There are valuable lessons for mixed reality hardware design.
- The way hardware looks on the primary user.
- The perception of the primary user’s behaviour while they are wearing the hardware.
- Privacy of secondary users who are viewed by primary users.
- Signals that indicate the hardware’s functions and whether they are in operation.
At the moment, mixed reality is little more than a novelty. A source of momentary amusement. But soon, there will be an explosion of content ideation and development. A renaissance of art, science and engineering based on digital content interacting with real world elements. But the ability to capture, document and share those experiences will be a delicate balancing act between the primary user’s experience and the impact of their behaviour on the secondary users around them.