Mixed reality for the perfect pilates workout

One of the most obvious use cases for mixed reality is the ability to locate two-dimensional displays in space. These days digital screens seem to be everywhere: airport lounges, bus shelters, highway billboards, fixed to the ceiling in dentists’ rooms. But physical screens come with a number of drawbacks, including:

  • Cost (initial purchase and electricity).
  • Installation.
  • Fragility.
  • Maintenance.
  • Obsolescence and need for replacement.

So the ability to place a virtual screen wherever you like, at little or no cost, presents an almost infinite range of potential use cases.

But there’s one particular situation I’ve been considering for a while: how mixed reality displays could improve exercise.

Proprioception is the sense of one’s own body: where our body is located in space, how it moves and the relative position of its parts. For example, close your eyes, stretch out an arm, then try to touch your nose. To complete this task you need to have an idea of the length of your arm, the angle at which your arm is bent and the location of your nose. But this ability is not just for touching your nose in the dark.

“The ability to sense force, which is known as mechanosensation, provides humans and other animals with important information about the environment; it is crucial for social interactions, such as comforting or caressing, and is required for motor coordination… Similarly, proprioception is considered to be essential for posture and controlled movement, but little is known about the underlying mechanisms and the precise role of this sense.” (From The New England Journal of Medicine.)

Undertaking any new physical activity provides a real workout not just for the body but also for the brain. If you’ve ever taken up a new exercise that requires a lot of coordination, you know what I’m talking about. There’s a clumsy dance between your brain and body, as they both try to talk to each other about how to coordinate these new movements.

How can we help this body/brain connection while learning new movements?

We can assist this process by showing the body what it’s doing in real time. Once we’ve been shown how to do something (by an instructor or video) we understand what the correct movement looks like: we just don’t understand what it should feel like. So it makes sense that if we could see what we’re doing, we can compare that image with a mental map of what we should be doing and then make the necessary adjustments.

How do we see what we’re doing? Easy. We can use mirrors. We could also record ourselves, but then we wouldn’t receive the real-time feedback of comparing what we see with what we feel.

As humans we’ve long understood the value of this aid. It’s why dance studios have mirrors along their walls, as do gyms and yoga studios.

I am relatively new to pilates reformer classes but have practiced pilates and yoga on and off for a while. For those who aren’t familiar, Figure 1 shows what pilates reformer equipment looks like, along with some of the associated postures.

Figure 1. Various postures on a pilates reformer machine. (Original images from http://www.pilates-exercises.info)

I know. Looks like medieval torture. But I’ve gained a lot of health benefits like strength and flexibility. I also like the way the exercises force me to use my brain. I have to really concentrate. Part of this focus comes from watching my body in mirrors around the studio, adjusting my posture and movements against what I’ve been shown by the instructor. But this has some difficulties. Let’s take a look at the layout of my pilates studio, including the placement of mirrors.

Figure 2. Layout of a pilates reformer studio.

Mirrors are located as follows:

  • One mirror in front of each reformer machine on the North wall.
  • A large mirror on the East wall.
  • A large wall mirror on the South wall located behind a trapeze table.
  • No mirrors on the West wall.

There are a lot of mirrors in the studio but it’s not always possible to see my reflection. This depends on my body position, posture, head angle, location of nearby mirrors and other people/objects in the studio.

In Figure 3, I’ve highlighted a rough estimate of a user’s central gaze in each posture (“near peripheral vision” approximately 30° either side of straight ahead, 60° total field of view).

  • A blue mirror indicates the user can see her reflection.
  • A red mirror indicates she cannot.

 

Figure 3. Central eye gaze of pilates reformer user and visibility of her own reflection in a nearby mirror.

As illustrated, the user can only see her reflection in three postures: A, E and F. Even then, it’s not possible to see her whole body at once.

Another complication is that these postures are not static: the user moves in different directions, as shown in Figure 4.

 

Figure 4. Directional movement of pilates reformer user.

OK, so today’s pilates reformer setup isn’t ideal.

But how could mixed reality help a student see how her body moves while she works out?

A mixed reality design facilitating proprioception during exercise

(Note: the following designs assume the user is wearing a mixed reality headset or glasses. Ideally the device should not impede movement or cause discomfort during the activity. Realistically, no current mixed reality hardware would be useful in this context, as it is either tethered and/or too heavy for prolonged use during exercise. When mixed reality content can be displayed on lightweight glasses or contact lenses, this use case will become more likely.)

Mixed reality displays

Mixed reality offers the use of dynamic displays. Basically, these look like a two-dimensional screen hanging in space. This presents many advantages over a mirror or any type of digital screen in the real world.

A mixed reality display can be:

  • Displayed anywhere. It could be fixed to any plane/flat surface in the real world e.g. wall, table, floor or ceiling. If there are no planes, the display could “hover” in the air at a fixed and comfortable distance from the user within their central eye gaze.
  • Displayed at any size or ratio. The display could fill an entire wall of the studio or be the size of a smartphone screen.
  • Adjusted to move with the user. During exercise the user may move their head and/or body in different directions. The mixed reality display can move with the user to ensure continuity.

For our pilates reformer student, we can position a display at a comfortable distance and within their central eye gaze, no matter what position they are in or which direction they look, as illustrated in Figure 5.

Figure 5. Mixed reality displays located within central eye gaze of pilates reformer user.
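
To make this concrete, here’s a minimal sketch of how a display could be kept at a comfortable distance within the user’s central gaze. The HeadPose and Vec3 types, and the way head tracking data arrives, are assumptions for illustration, not any particular headset’s API.

```typescript
// Minimal sketch: keep a virtual display at a comfortable, fixed distance
// along the user's gaze. HeadPose and Vec3 are illustrative placeholders,
// not a real headset API.

type Vec3 = { x: number; y: number; z: number };

interface HeadPose {
  position: Vec3; // head position in world space (metres)
  forward: Vec3;  // unit vector pointing where the user is looking
}

const COMFORT_DISTANCE = 1.2; // metres in front of the user (assumed value)

function displayPosition(pose: HeadPose): Vec3 {
  // Place the panel along the gaze direction so it stays within central
  // vision as the user changes posture or direction.
  return {
    x: pose.position.x + pose.forward.x * COMFORT_DISTANCE,
    y: pose.position.y + pose.forward.y * COMFORT_DISTANCE,
    z: pose.position.z + pose.forward.z * COMFORT_DISTANCE,
  };
}
```

In practice the panel’s position would probably be interpolated between frames so it doesn’t jitter with every small head movement.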

Capturing the ideal perspective

A mirror can only show a user’s reflection. Mixed reality can show much more. During a pilates class a user would ideally like to see their body from multiple angles without having to turn their head (which in itself could disrupt their balance, posture or movement). Thus, in an optimal situation we should capture a user’s movement from different angles and then select the ideal perspective that highlights overall movement.

We need cameras to capture the user’s movements. The number and placement of these cameras requires consideration.

  • Ideal recording angle and distance. For each possible posture, we determine the best angle(s) from which to view the entire body. Figure 6 illustrates that for a seated posture an ideal camera location is at a 90-degree angle and a height of approximately 1200 mm. The horizontal distance (x) depends on where other objects are located in the vicinity of the user.

 

Figure 6. Camera location for ideal perspective.

As illustrated in Figure 7, the ideal recording angle could be obtained from a camera located on either side (B or C) or above the user (A) depending on the user’s posture.

Figure 7. Multiple camera locations providing alternative recording angles.
  • Camera locations. Reviewing the overall studio layout, we can locate cameras where they facilitate ideal recording angles for each pilates reformer machine. One thing to note is that cameras need to be fixed to something. In this case study, I’ve attached them to the wall, ceiling and standing objects (e.g. weight machine, trapeze table).

Figure 8. Layout of pilates reformer studio with potential camera locations.

Camera feeds and content display feeds

Now that multiple cameras are capturing feeds from different angles, we can display these feeds to the user. But which one?

One solution is to allow the user to scroll through feeds and select a preferred view, as demonstrated in Figure 9. (Although this basic mockup uses images of different users, it illustrates how the user could see themselves from different angles and make a selection through a rotating carousel format.)

Figure 9. Mockup of basic UI for a mixed reality display. Right and left arrows facilitate navigation through a rotating carousel of live camera feeds.
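
As a rough sketch of how that carousel could be structured, the snippet below cycles through a list of live feeds with wrap-around navigation. The CameraFeed shape and the feed identifiers are illustrative assumptions, not an existing API.

```typescript
// Minimal sketch: a rotating carousel of live camera feeds, driven by the
// right/left arrows in Figure 9. All names are placeholders for illustration.

interface CameraFeed {
  id: string;    // e.g. "ceiling-A", "side-B" (hypothetical identifiers)
  label: string; // shown to the user, e.g. "Above", "Left side"
}

class FeedCarousel {
  private index = 0;

  constructor(private feeds: CameraFeed[]) {}

  current(): CameraFeed {
    return this.feeds[this.index];
  }

  next(): CameraFeed {
    this.index = (this.index + 1) % this.feeds.length; // right arrow
    return this.current();
  }

  previous(): CameraFeed {
    this.index = (this.index - 1 + this.feeds.length) % this.feeds.length; // left arrow
    return this.current();
  }
}
```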

Interaction design

It’s important to note that in most pilates postures, both hands are occupied during exercise. So what types of interactions are possible while wearing a mixed reality headset?

  • Gesture. A gesture-based user interface could be used to access the menu before and after exercise. The user could open the system, navigate through menus and commence “exercise mode”. Once this mode has been engaged, the user must rely on other system inputs.
  • Gaze tracking. This may not be entirely useful during exercise mode as the user’s head will move during exercise. However, like gesture inputs, gaze tracking could be used to open and navigate menus, before and after exercise mode.
  • Voice commands. As long as the interface design is quite simple, voice commands can facilitate navigation through camera feeds and open/close “exercise mode”. In the mockup illustrated in Figure 9, the user could say “next feed” to view the next camera feed (i.e. click the right arrow) or “previous feed” to view the previous camera feed (i.e. click the left arrow). One drawback is that some users may feel uncomfortable issuing voice commands in a public space. My experience in pilates reformer classes and gyms is that there is often music playing or people chatting, so it might not be too awkward. (A rough sketch of this command mapping follows the list.)
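
Here is a rough sketch of how those voice commands could map onto the feed carousel sketched earlier. The phrases and the source of recognised speech are assumptions; the point is that a very small command set is enough.

```typescript
// Minimal sketch: map recognised phrases onto carousel navigation and
// "exercise mode". The speech source is assumed to deliver plain text.

function handleVoiceCommand(
  phrase: string,
  carousel: FeedCarousel,
  state: { exerciseMode: boolean }
): void {
  switch (phrase.trim().toLowerCase()) {
    case "next feed":
      carousel.next();       // same as the right arrow in Figure 9
      break;
    case "previous feed":
      carousel.previous();   // same as the left arrow in Figure 9
      break;
    case "start exercise":
      state.exerciseMode = true;
      break;
    case "stop exercise":
      state.exerciseMode = false;
      break;
    default:
      // Ignore everything else: studios are noisy, so false phrases are likely.
      break;
  }
}
```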

Potential issues

  • Unobstructed feeds. Clear camera angles may be difficult to obtain in a busy studio. People and equipment are constantly moving. When reformers are side by side, it isn’t always possible for the user to have an unobstructed view of themselves in a mirror or a camera feed. This may require rearranging the location of machines within a studio.
  • Which camera feeds? The system must be configured, or be “smart” enough, to know which camera feeds to display to each user, so that the user is only presented with angles of themselves. One solution is for the system to recognise which reformer machine is in use and therefore which cameras will provide the ideal recording angles.
  • Privacy. Capturing live video feed presents an opportunity to record these feeds for later viewing. This may be useful to individual users to improve their practice. However, all studio clients should consent to recording.
  • Safety. Overlaying a mixed reality display over the real world is a potential safety hazard. Display feeds should not obstruct vision. One solution is for the system to recognise when objects enter the gaze area or come within a prescribed distance of the user. For example, when someone in the real world “walks through” the display they should appear to stand in front of it. (A sketch of this idea follows the list.)
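
As a sketch of that safety idea, the display could fade out whenever a tracked real-world object comes within a prescribed distance of the user. The object positions are assumed to come from the headset’s environment tracking, and Vec3 is the same placeholder type used in the earlier sketch.

```typescript
// Minimal sketch: fade the display out when anything real comes too close,
// so the feed never obscures a person or object approaching the user.

const SAFETY_RADIUS_METRES = 1.0; // assumed threshold

function distanceBetween(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function displayOpacity(user: Vec3, nearbyObjects: Vec3[]): number {
  const somethingTooClose = nearbyObjects.some(
    obj => distanceBetween(user, obj) < SAFETY_RADIUS_METRES
  );
  return somethingTooClose ? 0 : 1; // 0 = hidden, 1 = fully visible
}
```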

Additional features

The rapidly expanding field of Artificial Intelligence (AI) could take these live camera feeds and provide the user with additional information.

  • Live personal instructor. Monitor a user’s movements and alert them if they move beyond an “ideal” range, for example when the user’s posture is out of alignment or limbs are in an incorrect position that could lead to discomfort or injury. The alert could be delivered as a visual cue and/or an audio message. (A rough sketch of this check follows the list.)
  • Guide attention. Highlight, within the live camera feed, the areas of the user’s body where muscles should be engaged for a particular posture, for example using hamstring muscles rather than back or abdominal muscles.
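
A minimal sketch of the “live personal instructor” check: compare estimated joint angles against an ideal range for the current posture and raise an alert when something drifts out of range. The joint names and ranges are illustrative assumptions, and the pose estimation producing the angles is assumed to happen elsewhere.

```typescript
// Minimal sketch: flag joints that fall outside an "ideal" range for the
// current posture. Angle estimation from the camera feeds is assumed.

interface JointRange {
  joint: string;      // e.g. "left knee" (illustrative)
  minDegrees: number;
  maxDegrees: number;
}

function postureAlerts(
  measuredAngles: Record<string, number>,
  idealRanges: JointRange[]
): string[] {
  const alerts: string[] = [];
  for (const range of idealRanges) {
    const angle = measuredAngles[range.joint];
    if (angle === undefined) continue; // joint not visible in this feed
    if (angle < range.minDegrees || angle > range.maxDegrees) {
      alerts.push(`${range.joint} is outside the ideal range for this posture`);
    }
  }
  return alerts; // delivered as a visual cue and/or an audio message
}
```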

Future applications

Pilates may seem like a quirky use case to explore but there are serious applications that could benefit from mixed reality assisted proprioception.

Stroke is one of Australia’s biggest killers and a leading cause of disability. One particular problem stroke victims can experience is difficulty planning or coordinating movement, known as apraxia. They can also feel slow or clumsy when coordinating movements, which is known as ataxia.

In conjunction with a physiotherapist, a mixed reality system could become part of a Proprioceptive Neuromuscular Facilitation (PNF) program:

  • Real-time feedback. Showing a stroke victim how they walk, from different angles, as they walk. The visual input helps them make a mental association between what they are seeing and what they are feeling.
  • Mixed reality displays versus mirrors.  Stroke victims may find it difficult to turn their head and look into a mirror while trying to coordinate their own walking movements at the same time. Additionally, it is not always possible to locate large mirrors in many positions around a rehabilitation centre.
  • Voice commands. Stroke victims rely on their hands and arms to steady themselves while learning to walk. Thus their hands are not available to operate a smartphone or other device. While someone else could operate a system on their behalf, voice commands provide stroke victims with autonomy and independence to use the system themselves.
  • Recording and analysis. Later viewing can help a stroke victim see and understand their own movements. Working with a physiotherapist, they can plan future therapy sessions collaboratively.

Is ZapBox the ultimate mixed reality rapid prototyping tool?

Zappar, the makers of ZapBox, have provided their Kickstarter backers with a beta project to play around with prior to the arrival of their ZapBox kits.

The aim of the project is to help developers create mixed reality content, and for users to experience that content, without the need for expensive hardware. Following the route of Google’s successful foray into accessible virtual reality content via Cardboard, ZapBox is a simple cardboard viewer that holds a smartphone running the ZapBox app on iOS or Android. Using printed black and white markers, the smartphone’s camera is able to map the surrounding environment and position mixed reality content.

Cheap and fast prototyping

As per my previous post, I’m excited about being Project Backer Number 403 and adding ZapBox to my prototyping toolkit.

Rapid prototyping with tools such as paper, cardboard, role playing and game pieces is important, but there comes a point working in the mixed reality space when you really need to get a feel for where objects are located in 3D space, how they appear against the real world and how users will interact with those elements. The ability to test a hypothesis, quickly and cheaply, can save a lot of time and guide content creation down more fruitful avenues.

There is certainly a gap in the market for a cheaper price point, even in these early days of mixed reality. For example, the development edition of the Microsoft Hololens will set you back $4,369 while the Meta 2 development kit costs $949. But how does ZapBox compare in terms of features and functionality? The ZapBox team undertook a review of the mixed reality hardware landscape to provide a comparison.

Although ZapBox has strong technical features, it lacks one important capability compared to other hardware: markerless tracking. My understanding is that printed markers are required to:

  • identify a real world object that has mixed reality properties
  • position mixed reality objects and
  • map the physical environment in which mixed reality elements will move.

Despite this weakness, I still believe it will be a great tool for early stage testing of concepts and basic interactions with users.

Testing out the demo for ZapBox beta

The demo included:

  • 3 x PDF printouts for the controller (shaft, diamond/hexagonal dome and cap)
  • 1 x PDF printout for the tracking markers
  • ZapWorks Studio file
  • ZapCode.

I printed and cut out the controller PDFs on standard A4 paper. Since the shaft was going to get the most handling, I decided to reinforce it with lightweight cardboard. I measured and noted the same fold lines as the printout, scored the cardboard along the fold lines with a blade, then glued the printed sheet to it. This felt much sturdier than paper alone.


When I tried to attach the controller diamond to the shaft, it was pretty fiddly. Then I realised there were little lines along one side of each hexagonal piece; I’ve marked these in red in the image below. Each line actually represents a cut. I recommend using a blade to create these rather than scissors. The blue flaps then tuck neatly into each adjoining cut. (Hey Zappar! If you’re going to use these PDF templates in the future, please indicate “cut lines” using colour, a dotted line or something. Thanks!)

After some fiddly handling I inserted all the flaps in the diamond then glued it to the shaft, adding tape for extra strength. Voila!

Now it was time to test it out. I opened the ZapBox beta app on my Android smartphone and clicked Get Started.

Next I scanned the demo ZapCode. When the code had finished loading (just a second or two) I focused my camera on the controller and markers.


Success! The controller diamond transformed into a blue flower while the tracking sheet displayed a large 3D button. Using the controller/flower I could press the button. This resulted in the button depressing, my smartphone screen flashing red and an alarm sounding.

A few notes on my own user experience

I felt it was important to document my initial user experience. Too often, we rush into using new technology, eager to figure out how everything works before optimising for maximum efficiency. But we soon forget the novelty of our first experience. The new features that surprised and delighted us. Or the frustration and confusion of something that wasn’t intuitive. (It doesn’t work. Is it just me? Who the heck designed this?) Standing back and reflecting on our own experience is also a good reminder down the track, when we design new interactions or our designs are placed in the hands of new users.

  • Not hands free, yet. It was fiddly holding my smartphone in one hand and the controller with the other while trying to aim the camera in the right direction. But this shouldn’t be an issue with ZapBox as it will be strapped to one’s head, leaving hands free to move and manipulate the controller(s).
  • It’s pretty! The controller diamond’s flower was aesthetically pleasing. It was a nice middle ground, not too cartoon-like but also not trying too hard to be realistic. Blue was also a good colour selection creating a strong contrast against the real world environment.
  • Tracking. The app was able to render the controller diamond’s flower very well. Moving closer to the flower allowed me to see more detail. An interesting side note is that when the controller’s diamond cap was removed, I could see “inside” the flower. Not sure if this was part of the design, but it was cool.
  • X, Y, Z movement. Moving the controller to press the button was a little tricky. I had to get my head around how to move my hand through Z space, not just up and down. It didn’t take long to get used to, but it wasn’t as intuitive as I anticipated. I recall a similar experience when I learned how to use Leap Motion for the first time.
  • Absence of tactile/haptic feedback. Pressing the button was cool but I missed the satisfying sensation of resistance as the button was “clicked”. The Zappar team’s use of visual (red flashing screen) and audio cues (alarm) is a great way to compensate for this feedback absence. It provides the user with positive feedback as a direct result of their physical action. This will be an important feature in the design of future mixed reality user interfaces.

Developing content with ZapBox

Mixed reality content for ZapBox is developed in ZapWorks Studio, with dynamic and interactive experiences coded in TypeScript, a typed superset of JavaScript. The demo came with a ZapWorks project file so you can see all the nifty code behind the interactions. Here’s some sample code.
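
The snippet below is an illustrative TypeScript sketch of the demo’s interaction rather than the actual ZapWorks Studio API: pressing the 3D button triggers a red flash and an alarm sound. Every identifier in it is a placeholder.

```typescript
// Illustrative sketch only (not the real ZapWorks Studio API): the shape of
// the demo interaction, where pressing the 3D button triggers a red flash
// and an alarm sound. All identifiers here are placeholders.

declare const button: {
  on(event: "press", handler: () => void): void;
  playAnimation(name: string): void;
};
declare const screen: { flash(color: string, milliseconds: number): void };
declare const audio: { play(clipName: string): void };

button.on("press", () => {
  button.playAnimation("depress"); // the button sinks into the tracking sheet
  screen.flash("red", 300);        // visual feedback
  audio.play("alarm");             // audio feedback
});
```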

There are several video tutorials available on the Zappar YouTube channel which cover how to use ZapWorks Studio to create content and program interactions. I’ve been through all the tutorials and it looks fairly straightforward. So the next step will be to use the controller to create an experience of my own. I’m keenly awaiting the arrival of my ZapBox through the Kickstarter project. In the meanwhile I already have a few ideas in the design pipeline.

So is ZapBox the ultimate mixed reality rapid prototyping tool?

I’ll only be able to answer this question once I’ve had the chance to create a prototype but so far it looks good.

  • Inexpensive development. Apart from purchasing some ZapCodes ($1.50 each through a Personal Developer account) there is no cost to create prototypes. The only other materials you need are paper and cardboard. All content is hosted on Zappar’s ZapWorks Studio platform.
  • Fast development and iterations.  Changing content is as easy as updating and committing your code then rescanning the ZapCode.
  • Quick and easy user testing. There is minimal setup for the user before they can start interacting with content. The headset does not require any wires, so the user experience is completely untethered.

The quality of mixed reality experiences will depend heavily on the way in which users interact, move around, and perceive mixed reality objects against a real world backdrop.  The design process will benefit from early insight into the flaws or weaknesses of initial designs via prototyping, testing and iterations. Tools like ZapBox can help provide this insight before developers create detailed designs for more complex and expensive mixed reality hardware.

Fantastic beasts… and gnomes in your garden

I think there’s something truly magical about the prospect of mixed reality. As children we grow up with fairy tales about fantastic creatures living in other worlds or long lost lands. Dragons, elves, unicorns, fairies… and maybe even gnomes.

My friend posted this photo on Instagram today.

Another photo revealed “windows” higher up the same tree.

What a wonderfully creative and charming way of engaging people’s imagination as they walk by!

There is something whimsical and intriguing about imagining that friendly magical creatures live among us, demonstrated in the enduring popularity of fiction from classic fairy tales to contemporary works such as J.K. Rowling’s Fantastic Beasts and Where to Find Them. But there are few opportunities to experience these characters in real life. Although amusement parks like Disneyland include amazing buildings, life-size characters and special effects, you have to travel to select locations.

There is an opportunity for mixed reality to create magic on our very own doorstep. It’s easy to imagine how mixed reality could extend this cute idea of a “fairy tree”:

  • A fairy living in your very own garden. She might come out in the morning and water her plants, or on a sunny day set up a deck chair and read a book.
  • A collective of fairies living in a local park. They come out of their homes, visit each other and shelter under leaves when it rains.
  • At night, the windows light up or a lantern is lit outside their door, to show that they’re home.
  • You could leave tiny notes in their mailboxes and the next day you’d have a response, in your own mailbox.

This has interesting implications for brands like Disney. Yes, people may still want to travel to Disneyland and experience the magical world of Disney through all five senses. But the company will also have an opportunity to design experiences that can play out within people’s own homes. Instead of just watching DVDs, dressing up and playing with figurines, children will be able to act out their favourite scenes with the characters themselves. (Extending this idea, you can also imagine the fun that people could have acting out the scenes of their own favourite movies.)

And if you’re in any doubt as to whether adults, and not just children, would be interested in these types of experiences and exchanges, here’s an example. Located in Bunbury, 200km south of Perth in Western Australia, is a little village called Gnomesville, populated solely by gnomes. Over the years it has become a tourist attraction in its own right, as people from near and far arrive to discover the secret lives of gnomes.

Augmented reality content streams: communication at concerts

I went to an awesome concert last night: James Vincent McMorrow played at the Chevron Festival Gardens in Perth. Throughout the evening I enjoyed the music, venue and great atmosphere. But before and during the concert, I couldn’t help noticing the other concert goers and imagining how mixed reality might improve everyone’s future experiences.

Crowd safety and security

I don’t like the term crowd control. It makes it sound like you’re herding cattle or something. I think what we’re all aiming for, as concert attendees and staff, is crowd safety. I’ve been to quite a few concerts and dance parties in Australia and overseas. It’s great being up the front, dancing near the DJ or band, in the thick of it among all the other enthusiastic fans. But it can get hectic at times, and I’m short. On more than one occasion the crowd has surged forward and I’ve felt a sense of panic and claustrophobia. Thankfully, those moments have only lasted a few seconds. But others are not so lucky. Sometimes it’s not the crowd en masse that presents a potential danger but a one-off situation. Two guys start a fight. Someone faints. Someone else climbs the lighting rig at Alison Wonderland’s secret warehouse party (seriously dude, not cool). In those instances, venue staff have to get to those people, and quickly. But it’s not that easy.

Firstly, someone has to alert staff to what’s going on. A staff member has to make their way to that spot. Then they have to assess the situation. They might have to administer first aid. If they need additional help, they have to call for other staff and explain where they are located. If people have to be moved, they need to clear a path through the crowd.

Essentially, what we’re trying to improve is communication between security, first aid responders and the audience. How do we transmit important information between people quickly and efficiently?

A key design consideration is the environment. A concert space is sensory overload. An audience wants to be wowed with amazing visual and audio effects from the artist they’ve come to see. Crowds make a lot of noise and can obscure vision. This all makes it very difficult for venue staff to see and hear (even with headphones) what’s going on around them and to communicate with their colleagues.

How could mixed reality aid communication in situations like these?

I’ve explored some key considerations from the perspective of someone who works at the concert. I’ve mocked up some very simple designs and will take you through my thought process.

Location of an incident

The central element in my design is a totem or navigational pole that pinpoints a potential incident as per the image below. I’ve deliberately taken something from the real world in order to capitalise on an existing system of meaning. That is, it’s easy for people to understand what an object may mean or do in a mixed reality universe because it behaves in a similar way to its real world counterpart. I’m not creating or trying to teach them a whole new system of meaning (not in this case anyhow but there are exceptions).

Design No.1 (Original image by Kmeron via Flickr).

So these poles would function just like navigational signs in the real world. They are fixed to one location, so poles in the distance appear smaller than those in the foreground. This might sound very obvious but it’s actually a key difference between augmented reality and mixed reality. Augmented reality would layer these objects statically over your vision. But in mixed reality these poles behave like real objects: they are fixed to a single specific location, remain a fixed size in space, disappear from view when something is placed in front of them, and you can approach and walk around them. (Of course you could also adjust any of these properties as the situation required.)

Type of incident

Each totem pole is topped with an emoji which describes the current issue. For example:

  • The “angry emoji” represents an aggressive person.
  • Two angry emojis represent a fight between two people.
  • The “emoji with mask” represents a person in need of medical attention.

I’ve used emojis rather than text because I think images and symbols communicate faster than words. I also believe in the KISS principle and feel that text would be unnecessary clutter. If the priority is to get assistance to a location as quickly as possible then we should only communicate pertinent information. Emojis have another advantage over text: they are not bound by language. So two different people can quickly understand the system, and each other, even if they don’t speak the same language. A great idea for large festivals with attendees from different countries.
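
Pulling the last two sections together, here is a small sketch of the data a single totem pole might carry: a fixed world position (so it behaves like a real object rather than a screen overlay) plus the emoji or emojis describing the incident. The field names are assumptions for illustration only.

```typescript
// Minimal sketch: the data behind one totem pole. A fixed world position gives
// the mixed reality behaviour described above (distant poles look smaller,
// real objects can occlude them); the emojis communicate the incident type.

interface IncidentTotem {
  id: string;
  worldPosition: { x: number; y: number; z: number }; // fixed point in the venue (metres)
  emojis: Array<"angry" | "mask">;                     // e.g. ["angry", "angry"] for a fight
}
```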

I’ve steered clear of menus or other complicated UI, and with good reason: it’s difficult to introduce elements into someone’s field of view when they’re mobile (or stationary but moving their head frequently). At Oculus Connect, Patrick Harris (Lead Game Designer at Minority Media) described the difficulties of designing UI for the virtual space. “There are no corners of the screen when you put on an Oculus Rift! That’s not a location that exists… You need to realize that anything in the ‘corner’ is actually [the player’s] peripheral vision.”

So rather than impose a static menu over a person’s vision, mixed reality objects can be more natural because they appear as real objects blending in with the existing environment. But therein lies another problem. Mixed reality elements need to contrast enough so that they are noticeable within the real world environment.

Use of colour

I’ve used a bright orange colour for the totem poles. I could use red and green in the colour palette later but I’d have to be careful not to assign opposing values (on/off, yes/no) that aren’t also distinguishable by symbols or text since that could disadvantage people who are colour blind. However, my mockup quickly revealed that orange is difficult to see against bright stage lights. It might be better to use a brighter colour with stronger contrast against a range of different lighting and potential stage effects (pyrotechnics, cryogenics, flames etc).

In fact, it may be necessary to use mixed reality to occlude vision from those effects before overlaying the totem poles. That is, overlay a dark background and then add the totem poles. (This will be easy to do in the future, as I predict that concerts will have a high degree of mixed reality content in themselves. If so, venue staff could simply opt out of the “concert content stream” and focus on the “venue/security/safety stream”. I’ll explore the notion of mixed reality content streams in a future post.)

Urgency

Based on experimenting and some findings with the first design, I iterated and created an updated version. I’ve changed the pole colour to bright blue which stands out much better.

Design No.2

So next I wanted to consider: what if multiple incidents are happening simultaneously? How do I know which one is the most urgent? There needs to be a design element to communicate a sense of urgency, allowing staff to prioritise their actions. In the image above, I used solid blue lines for the poles to indicate more serious situations versus dashed lines for less serious situations. But thanks to the mockup I can already see a design flaw: from a distance dashed lines look solid anyway, so you can’t really distinguish between them.

Another way to communicate urgency could be to make the totem poles and emojis flash or animate. But this may still be hard to distinguish against crowd movement, stage lighting and special effects. A single static icon may be the simplest way to express this, which I have incorporated as a red exclamation point over the emoji (see Design No.3).

Assigning tasks

With multiple staff on shift, how would I know whether someone was already on their way to deal with a specific incident? My design mockup includes a simple white circle placed at the bottom of the pole to represent a staff member. At first, I included profile images within the circles. But for the sake of simplicity I removed them. I think the important thing is to know that someone is on their way, not who. (This information could always be added later.) Multiple people attending an incident are represented by multiple white circles.

I quickly realised that the totem pole design also provides a neat solution to the problem of knowing how far a staff member was from an incident. As the staff member approaches the totem pole, the white circle that represents that individual also moves up the pole. Once the white circle reaches the emoji you know that someone has reached the incident location. And because the totem poles are showing data in real time, you can see how fast someone is moving towards that location.

Design No.3
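
The climbing circle can be driven by a very simple calculation: the fraction of the original distance that a responder has already covered. A sketch follows, assuming staff positions are available from their own devices; the function name and inputs are hypothetical.

```typescript
// Minimal sketch: how far up the pole a responder's circle should sit.
// 0 = just dispatched, 1 = arrived at the incident (circle reaches the emoji).

function circleHeightFraction(currentDistance: number, startingDistance: number): number {
  if (startingDistance <= 0) return 1; // dispatched from the incident itself
  const progress = 1 - currentDistance / startingDistance;
  return Math.min(1, Math.max(0, progress)); // clamp to the pole's length
}

// Example: a responder dispatched from 40 m away who is now 10 m out:
// circleHeightFraction(10, 40) === 0.75, i.e. three quarters of the way up the pole.
```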

Emergency exits

If a patron needed emergency medical attention they might need to be moved out of the crowd or building as quickly as possible. Although venue staff may know exactly where exits are located, it may not be clear how to reach them. Especially if it’s dark, noisy and crowded.

Mixed reality could map the room and obstacles, then indicate the quickest and safest route to the nearest exit. In part this could be done via object recognition but integrating the venue’s floor plan would be ideal. If patrons had access to the venue’s mixed reality content stream, a message could also be transmitted, requesting them to move aside to allow staff to move an injured person. In this way, a mixed reality stream could also serve as a public address system.

Design No.4
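
As a rough sketch of the routing idea, suppose the venue’s floor plan is reduced to a coarse grid where cells are marked blocked by object recognition or the floor plan itself. A plain breadth-first search then finds the shortest route in steps to the nearest exit; this is an illustrative sketch, not a production pathfinder (which would also weight cells by crowd density).

```typescript
// Minimal sketch: shortest route to an exit over a simplified floor-plan grid.
// blocked[row][col] === true means the cell is impassable.

type Cell = { row: number; col: number };

function routeToNearestExit(blocked: boolean[][], start: Cell, exits: Cell[]): Cell[] | null {
  const rows = blocked.length;
  const cols = blocked[0].length;
  const key = (c: Cell) => `${c.row},${c.col}`;
  const exitKeys = new Set(exits.map(key));
  const cameFrom = new Map<string, string | null>(); // cell -> previous cell (null for start)

  const queue: Cell[] = [start];
  cameFrom.set(key(start), null);

  while (queue.length > 0) {
    const current = queue.shift()!;
    if (exitKeys.has(key(current))) {
      // Reconstruct the route by walking back through cameFrom.
      const route: Cell[] = [];
      let k: string | null = key(current);
      while (k !== null) {
        const [r, c] = k.split(",").map(Number);
        route.unshift({ row: r, col: c });
        k = cameFrom.get(k) ?? null;
      }
      return route;
    }
    const neighbours: Cell[] = [
      { row: current.row + 1, col: current.col },
      { row: current.row - 1, col: current.col },
      { row: current.row, col: current.col + 1 },
      { row: current.row, col: current.col - 1 },
    ];
    for (const n of neighbours) {
      if (n.row < 0 || n.row >= rows || n.col < 0 || n.col >= cols) continue;
      if (blocked[n.row][n.col] || cameFrom.has(key(n))) continue;
      cameFrom.set(key(n), key(current));
      queue.push(n);
    }
  }
  return null; // no reachable exit
}
```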

UI and feedback

User interface (UI) design for virtual, augmented and mixed reality is a very nascent field. I would like to cover it in more detail in another article but for this present design I will keep things simple. I wish to explore how a user could navigate this information in the context of a concert but omit how the user may input their selection (e.g. eye tracking, hand gestures, voice commands).

Design No.5 presents several key frames in this design:

  • The user can see all the totem poles at once.
  • The user activates a “selector tool” that allows them to focus on a particular mixed reality element in their field of view (illustrated via a simple blue crosshair to denote centre of screen) and select that element for further inspection.
  • Once an element has been selected a dialog box opens with additional detail (nature of the incident, specific requests and profile image of first responder) while all other elements in the field of view are “muted” (they become transparent).

Design No.5
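
A rough sketch of the selector tool described above: the element nearest the centre-of-screen crosshair is selected, and every other element is muted (made transparent) while its dialog box is open. The angle threshold and the way each element’s angle from the crosshair is obtained are assumptions for illustration.

```typescript
// Minimal sketch: select the element under the crosshair and mute the rest.

interface SceneElement {
  id: string;
  angleFromCentre: number; // degrees between the crosshair and this element
  opacity: number;
}

const SELECT_THRESHOLD_DEGREES = 5; // assumed tolerance

function selectUnderCrosshair(elements: SceneElement[]): SceneElement | null {
  // Pick the element nearest the crosshair, if any fall within the threshold.
  const candidate =
    elements
      .filter(e => e.angleFromCentre <= SELECT_THRESHOLD_DEGREES)
      .sort((a, b) => a.angleFromCentre - b.angleFromCentre)[0] ?? null;

  for (const e of elements) {
    // Mute everything except the selection so its dialog box stands out;
    // if nothing is selected, leave every element fully visible.
    e.opacity = candidate === null || e.id === candidate.id ? 1.0 : 0.3;
  }
  return candidate;
}
```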

Streams of mixed reality content

In this article I’ve predominantly covered the perspective of venue staff and first aid responders. The various mixed reality elements in this design are focused on providing those individuals with the information they need, at the right time (Just In Time Content). The “stream of content” is for their view only and can be switched on or off. In theory, elements of this stream could be shared with audience members.

As mentioned earlier I believe that mixed reality will become an integral part of future concert experiences. The audience will be privy to a private “stream” through which they can view all the mixed reality content presented by an artist. While venue staff could tune out/opt out of that stream and tune into their own security stream.

However, in emergency situations there could be elements that are made public and shared to all streams like a public broadcast, so that the audience and staff can access the same content. For example, if a fire broke out in one corner of the building a mixed reality interface could:

  • Trigger an announcement that the concert has stopped, requesting everyone to evacuate the building calmly.
  • Map safe evacuation routes.
  • Trigger a “warning” attached to all doors and routes leading to the dangerous area.
  • Highlight individuals who work at the venue so that they are easily identified.
  • Allow people who need assistance to “flag” themselves (elderly, parents with young children, physically disabled).
  • Allow dangerous obstacles or situations to be “flagged”.

Emergency responders could also access this mixed reality content stream to assist their response.

  • The exact location of the fire could be mapped out on a floor plan but also through mixed reality elements indicating multiple access routes.
  • Access to information about “flagged” areas indicating potential danger.
  • Easily highlight venue staff as points of contact.
  • Highlight important building features such as the location of water and gas pipes and electrical wiring.
  • The number and location of people still remaining in the building (assuming that patrons would allow themselves to be tracked at an event as a condition of entry).

So the next time you go to a concert, consider the kind of information you may want to access if you or a friend were in need of assistance. Or how the information you share could assist other patrons. In the meanwhile, enjoy the show!

Image credit: Kmeron via Flickr