Is ZapBox the ultimate mixed reality rapid prototyping tool?

Zappar, the makers of ZapBox, have provided their Kickstarter backers with a beta project to play around with prior to the arrival of their ZapBox kits.

The aim of their project is to help developers create mixed reality content and to let users experience that content without the need for expensive hardware. Following the route of Google’s successful foray into accessible virtual reality via Cardboard, ZapBox is a simple cardboard viewer which holds a smartphone running the ZapBox app on iOS or Android. Using printed black and white markers, the smartphone’s camera is able to map the surrounding environment and position mixed reality content.

Cheap and fast prototyping

As per my previous post, I’m excited about being Project Backer Number 403 and adding ZapBox to my prototyping toolkit.

Rapid prototyping with tools such as paper, cardboard, role playing and game pieces is important, but there comes a point working in the mixed reality space when you really need to get a feel for where objects are located in 3D space, how they appear against the real world and how users will interact with those elements. The ability to test a hypothesis, quickly and cheaply, can save a lot of time and guide content creation down more fruitful avenues.

There is certainly a gap in the market at a cheaper price point, even in these early days of mixed reality. For example, the development edition of the Microsoft HoloLens will set you back $4,369 while the Meta 2 development kit costs $949. But how does ZapBox compare in terms of features and functionality? The ZapBox team undertook a review of the mixed reality hardware landscape to provide a comparison.

Although ZapBox has strong technical features, it lacks an important capability found in other hardware: markerless tracking. My understanding is that printed markers are required to:

  • identify a real world object that has mixed reality properties
  • position mixed reality objects and
  • map the physical environment in which mixed reality elements will move.

Despite this weakness, I still believe it will be a great tool for early stage testing of concepts and basic interactions with users.

Testing out the demo for ZapBox beta

The demo included:

  • 3 x PDF printouts for the controller (shaft, diamond/hexagonal dome and cap)
  • 1 x PDF printout for the tracking markers
  • ZapWorks Studio file
  • ZapCode.

I printed and cut out the controller PDFs on standard A4 paper. Since the shaft was going to get the most handling, I decided to reinforce it with lightweight cardboard. I measured and marked the same fold lines as on the printout, scored the cardboard along them with a blade, then glued the printed sheet on top. This felt much sturdier than paper alone.


When I tried to attach the controller diamond to the shaft it was pretty fiddly. Then I realised there were little lines along one side of each hexagonal piece. I’ve marked these in red in the image below. Each line actually represents a cut. I recommend using a blade to create these rather than scissors. The blue flaps then tuck neatly into each adjoining cut. (Hey Zappar! If you’re going to use these PDF templates in the future, please indicate “cut lines” using colour, a dotted line or something. Thanks!)

After some fiddly handling I inserted all the flaps in the diamond then glued it to the shaft, adding tape for extra strength. Voila!

Now it was time to test it out. I opened the ZapBox beta app on my Android smartphone and tapped Get Started.

Next I scanned the demo ZapCode. When the code had finished loading (just a second or two) I focused my camera on the controller and markers.


Success! The controller diamond transformed into a blue flower while the tracking sheet displayed a large 3D button. Using the controller/flower I could press the button. This resulted in the button depressing, my smartphone screen flashing red and an alarm sounding.

A few notes on my own user experience

I felt it was important to document my initial user experience. Too often, we rush into using new technology, eager to figure out how everything works before optimising for maximum efficiency. But we soon forget the novelty of our first experience. The new features that surprised and delighted us. Or the frustration and confusion of something that wasn’t intuitive. (It doesn’t work. Is it just me? Who the heck designed this?) Standing back and reflecting on our own experience is also a good reminder down the track, when we design new interactions or our designs are placed in the hands of new users.

  • Not hands free, yet. It was fiddly holding my smartphone in one hand and the controller with the other while trying to aim the camera in the right direction. But this shouldn’t be an issue with ZapBox as it will be strapped to one’s head, leaving hands free to move and manipulate the controller(s).
  • It’s pretty! The controller diamond’s flower was aesthetically pleasing. It was a nice middle ground, not too cartoon-like but also not trying too hard to be realistic. Blue was also a good colour selection creating a strong contrast against the real world environment.
  • Tracking. The app was able to render the controller diamond’s flower very well. Moving closer to the flower allowed me to see more detail. An interesting side note is that when the controller’s diamond cap was removed, I could see “inside” the flower. Not sure if this was part of the design, but it was cool.
  • X, Y, Z movement. Moving the controller to press the button was a little tricky. I had to get my head around how to move my hand through Z space, not just up and down. It didn’t take long to get used to but it wasn’t as intuitive as I anticipated. It reminded me of learning to use Leap Motion for the first time.
  • Absence of tactile/haptic feedback. Pressing the button was cool but I missed the satisfying sensation of resistance as the button was “clicked”. The Zappar team’s use of visual (red flashing screen) and audio cues (alarm) is a great way to compensate for this feedback absence. It provides the user with positive feedback as a direct result of their physical action. This will be an important feature in the design of future mixed reality user interfaces.

Developing content with ZapBox

Mixed reality content for ZapBox is developed through ZapWorks Studio, with dynamic and interactive experiences coded in TypeScript, a typed superset of JavaScript. The demo came with a ZapWorks project file so you can see all the nifty code behind the interactions. Here’s some sample code to give a flavour of how an interaction is wired up.
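The sketch below is my own illustration rather than the demo’s actual code: the function names and structure are assumptions, not the ZapWorks Studio API. It shows the core of the button press as a per-frame distance check between the tracked controller tip and the virtual button, firing the visual and audio cues when they meet.

```typescript
// Illustrative sketch only: the demo's real code lives in the ZapWorks project
// file, and the names below are assumptions rather than the ZapWorks Studio API.

type Vec3 = [number, number, number];

const PRESS_DISTANCE = 0.05; // metres: how close the controller tip must get

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

interface PressState { pressed: boolean; }

// Called once per camera frame with the tracked controller tip and button
// positions; returns the cues to fire, mirroring the demo's red flash + alarm.
function updateButton(tip: Vec3, button: Vec3, state: PressState): string[] {
  const isTouching = distance(tip, button) < PRESS_DISTANCE;
  const cues: string[] = [];
  if (isTouching && !state.pressed) {
    cues.push("depress-button", "flash-screen-red", "play-alarm");
  }
  state.pressed = isTouching;
  return cues;
}

// Example: the controller tip arriving 2 cm above the button triggers the press.
console.log(updateButton([0, 0.02, 0], [0, 0, 0], { pressed: false }));
// -> ["depress-button", "flash-screen-red", "play-alarm"]
```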

There are several video tutorials available on the Zappar YouTube channel which cover how to use ZapWorks Studio to create content and program interactions. I’ve been through all the tutorials and it looks fairly straightforward. So the next step will be to use the controller to create an experience of my own. I’m keenly awaiting the arrival of my ZapBox through the Kickstarter project. In the meantime I already have a few ideas in the design pipeline.

So is ZapBox the ultimate mixed reality rapid prototyping tool?

I’ll only be able to answer this question once I’ve had the chance to create a prototype but so far it looks good.

  • Inexpensive development. Apart from purchasing some ZapCodes ($1.50 each through a Personal Developer account) there is no cost to create prototypes. The only other materials you need are paper and cardboard. All content is hosted on Zappar’s ZapWorks Studio platform.
  • Fast development and iterations.  Changing content is as easy as updating and committing your code then rescanning the ZapCode.
  • Quick and easy user testing. There is very minimal set up for the user before they can start interacting with content. The headset does not require any wires so the user experience is completely untethered.

The quality of mixed reality experiences will depend heavily on the way in which users interact, move around, and perceive mixed reality objects against a real world backdrop.  The design process will benefit from early insight into the flaws or weaknesses of initial designs via prototyping, testing and iterations. Tools like ZapBox can help provide this insight before developers create detailed designs for more complex and expensive mixed reality hardware.

Primary and secondary user experiences with mixed reality hardware

The way a user looks while using technology is crucial. Not just in an aesthetic “I want to look cool” kind of way but for most mainstream technology users, more of an “I don’t want to look like an idiot” way. The way a user feels and how they think they are perceived by others determines whether they feel comfortable while using a piece of technology and ultimately, whether they continue using it.

Additionally, a user’s appearance can also communicate messages to the people around them. These messages may be deliberate or unintended. In this post I explore how user experience in mixed reality should not only consider the primary user but also secondary users.

An early insight into augmented reality user experience issues

Google Glass (2013 – 2015)

Ah Google Glass. You were so exciting at the time, but quickly fell victim to the social equivalent of a pitchfork mob hunting down Frankenstein’s monster.

Early adopters of Google Glass were nicknamed “glassholes” (I’ll be polite and refer to them as Glassers) by members of the public who encountered them. Although I never came into direct contact with someone using Glass, I believe the hostility towards these users wasn’t due solely to their actual behaviour but perhaps to their “perceived” actions, in two ways.

Firstly, from what I have read about people’s experiences when engaging with a Glasser, they felt that Glassers encroached on their private space by recording them without their permission. Secondly, people felt unsettled and annoyed when engaged in conversation because the Glasser’s gaze darted to the small display screen in the corner of their field of view. Ironically, what people may have experienced were two extremes of the same spectrum.

On one hand they may have felt as though they were being probed and examined without their consent, knowing that once recorded, their version of “self” could be re-probed and re-examined ad infinitum for any number of flaws and weaknesses. On the other hand, their ego may have been bruised during the interaction because they didn’t command the Glasser’s full attention. Many couples have arguments that start simply because one partner does not appear to be really listening (although they can successfully repeat the last thing that was said). So privacy and social interaction norms appear to be two critical issues, not for the user herself but for the people around her. That is, secondary users.

This is really interesting food for thought. Currently, user experience design for mobile technology focuses predominantly on the user. But I wonder how many designs consider (either implicitly or explicitly) the impact of their hardware or software on the people with whom the user interacts.

Recording and privacy

When smartphones were first designed with cameras, the “red recording light” feature that was commonplace in most video recording devices at the time was omitted. Thus, the only signal that indicated a device was in “recording mode” was removed.

This omission has been applied to all smartphones (as far as I’m aware) and widely accepted by smartphone owners with, it seems to me, little objection. This design default worked against Google Glass when it was released, because its hardware design included three signals to indicate when video recording was in progress:

  • Illumination. The device’s screen was illuminated whenever it was in use, including recording video or a photo. Unfortunately this created the impression that the primary user was recording all the time, even when they were not.
  • Voice command. The primary user could speak a command – “Ok Glass, record a video” – to commence recording. However, if a secondary user was not present at the time the command was executed, they would not be aware that a recording had commenced.
  • Gesture command. The primary user could press a button on the Glass’s frame to commence recording. As per the previous signal, a secondary user had to witness this gesture to know a recording had commenced.

The illumination signal is the strongest cue to secondary users that a video recording is in progress. This is because it taps into a design pattern that is familiar to most secondary users of a certain age (the red light of an old-fashioned video recorder).

So what can we learn about secondary users’ experience from the demise of Google Glass? People desire, and have the right, to know when they are being recorded and to give their consent. Smartphones have largely avoided this issue because they don’t advertise when they are in recording mode. Of course, simply holding your phone at a particular angle for a length of time has become the well-known posture of someone taking a photo or recording.


Currently the power of recording lies in the hand of the recorder. But what if we could reverse this situation? What if the subject could control when and how they were recorded? Or whether they were recorded at all?

Charlie Brooker’s Black Mirror series has provided deliciously entertaining (and chilling) examples of near-future scenarios where modern day technology has been taken to extremes. In the “White Christmas” episode, users with enhanced augmented reality vision could block the image and sound of selected people from their field of view (replaced with a pixelated form as per the image below). In turn, those selected people could not see or hear the user. If both viewer and subject use the same mixed reality hardware/software/platform/network then the subject can “cloak” themselves from recording.

Still from “White Christmas” episode of Black Mirror, TV series by Charlie Brooker.

This Black Mirror example demonstrates a scenario between two people, a “one-to-one” context. However, this could also become a default setting in a “one-to-many” context. For example, when the user is in a public space they are hidden by the cloaking effect, but the effect is deactivated when they visit a friend’s house.

While taking group wedding photos outdoors in summer, photographers in Australia often request that guests remove their sunglasses. Perhaps a future request will be to “deactivate your mixed reality privacy cloaks” granting the bridal couple permission to record their image.

This has an interesting impact on how we might record and reflect on future historical events. Or our memories of past events. Will our future holiday photos look like this?

Still from “White Christmas” episode of Black Mirror, TV series by Charlie Brooker.

The above image is from the same Black Mirror episode. A user, found guilty of a crime, is punished via his augmented reality enhanced vision: he can no longer see nor hear other people for the duration of his sentence. An enforced digital isolation. But if private companies own and manage these mixed reality networks and associated content management systems, will government or law enforcement bodies have the power (or right) to police, intervene in and control the content streams of their citizens?

Social interaction norms between primary and secondary users

Take a look at this picture.

Meta (2014 – present)

Would you feel comfortable having a casual chat with this guy? Perhaps not. It reminds me of talking to someone while they’re wearing sunglasses indoors. Both situations feel unsettling. Why?

  • Trust. His eyes are partially hidden by the headset (I’m not aware of how occluded a user’s eyes may become when viewing content along the spectrum from augmented to full virtual reality content). This means you can’t tell where he is looking and research has shown that people rely on predictive gaze cues as a way of judging whether a person is trustworthy. Engaging in a simple conversation requires the interpretation of many complex and subtle eye movements that signal everything from interest and surprise to concern or boredom.
  • Primary user distraction. As the user engages with mixed reality content displayed on their screen, they are likely to become momentarily distracted from conversation. For example, a slight eye flicker towards their periphery when a notification pops up. But this might actually be an improved interaction compared to what happens with smartphones today, where attention diversion is much more obvious. (A notification sound chimes and/or the screen flashes, we pull the device towards us, look down, type a response with both hands while a frustrated friend drums their fingers on the table.)
  • Unfamiliar social context. As a new piece of tech, it can be uncomfortable and distracting to interact with someone using it. There are no cues or social norms (yet) for how to interact with someone wearing such a device. How should we approach them without interrupting their activity? How do we know when the device is “in use” (assuming that in the future such a device could be worn for long periods of time, otherwise just wearing it would indicate that it was “in use”.) And probably the most simple but important scenario: how do we know the user is looking at us rather than content on her screen? (We’ve all been in that awkward situation where we’ve waved back at someone only to realise that they were actually waving at someone else behind us.)

These issues are not just a problem for Meta but for other hardware developers in the mixed reality space. As pictured below, Microsoft HoloLens and Magic Leap may also face similar backlash from secondary users. However, in certain contexts it may be perfectly acceptable to wear and see others wearing these headsets (for example, work or educational settings). If you work in the tech, digital or IT industries you may even look forward to playing with these gadgets.

Microsoft HoloLens (2016 – present)
Magic Leap (Release not yet announced)

But aside from such contexts how could we solve these privacy concerns and social interaction shortcomings?

  • Signals that indicate function modes. Mixed reality hardware should feature signals that indicate to secondary users that certain functions are in operation. If a user is recording a video feed via the headset, there should be a signal to indicate that video recording is in progress. If a secondary user is also wearing a headset this task becomes easier: you can display this information in their mixed reality content stream. How many signals you might need and which features are more important to highlight is unclear. However, rapid prototyping “fake” headsets to test different types of signals and noting the experiences of primary and secondary users could be very useful.
  • Privacy settings. An individual should have the right to determine when they are recorded. Unfortunately the horse has already bolted in terms of privacy and technology. Currently, people seem to have an expectation that they can record anything they like on their smartphone but at the same time expect a degree of privacy from other people with smartphones. Current Australian government legislation is not entirely clear about the circumstances in which individuals may record conversations or digital interactions. In the future, when everyone is recording everything all the time, can we really expect government agencies to police and enforce who is recording what? Or should these safeguards be built in to the hardware/software itself from the outset? Moreover, facial recognition software has developed in leaps and bounds since the release of Google Glass. Mixed reality will allow anyone to view a person and search for their digital identity online instantly. Useful at networking conferences but potentially dangerous at nightclubs or bars. Will there be the equivalent of a “mixed reality SEO scrambler” to stop people from being recognised?
  • Fair exchange of utility and value. Many people enjoy free digital services knowing (to an extent) that they are “paying” for this convenience with data about themselves and their online behaviour. History has shown that users are happy to give up a degree of privacy and initial social awkwardness (when trying new products or services) in exchange for what they perceive to be useful or essential applications (email, restaurant recommendations, travel directions, photo backups etc). Mixed reality software and applications will need to compensate users for any inconvenience with what is perceived to be adequate data, tools or services.
  • Mass adoption. Not so much a solution but an evolution of technology uptake. If everyone is wearing a mixed reality device, it evens out the playing field. The ability for individuals to control their own image presents the public with a strong incentive for rapid mass adoption of such technology: you need to own this hardware in order to be excluded from default recording.

These are very real issues that will affect users in the very near future. But what about beyond the current hardware iteration of augmented reality headsets or glasses? What might lie ahead?

Future iterations

An obvious evolution from headsets to glasses is contact lenses. Google has already done pioneering work in this field, developing contact lenses that monitor glucose levels in people with diabetes.

Google’s smart contact lens project.

I’m a massive fan of William Gibson. I was hooked as soon as I read his iconic cyberpunk novel Neuromancer. I loved the character Molly Millions, who had some very extensive augmentations, one of which was a pair of vision-enhancing mirrored lenses implanted within her eye sockets.

Molly by NiekSchlosser

I was intrigued by what Molly would have been able to see. But I was also conscious that her augmentation was irreversible, thereby changing irrevocably the way she interacted with other people. It would have been startling to encounter her for the first time. Similar to engaging with someone wearing sunglasses indoors as mentioned earlier in this post, Molly would not have been able to exhibit social cues that are normally expressed through the eyes.

In 2008 I wrote a science fiction novel. Set in the near future of 2045, it explored themes of identity, memory, augmentation and the extent of personal agency. I found it was an extremely useful way of exploring the context and use cases for future technology. One piece of technology that I developed was mixed reality content streams accessible through “cat lenses”.

Many animals, including cats, have a third eyelid known as a palpebra tertia or nictitating membrane which helps maintain the health of the eye. In my novel, people had cat lenses surgically implanted within their eye sockets. But unlike Molly’s mirrored lenses, which remained in place permanently, cat lenses could close and retract via muscles around the eye. In closed mode, the eyelid completely covered the eye. The user could view mixed reality content on the surface of the eyelid, displaying content over their full field of vision. When fully retracted, the cat lenses were no longer visible. This functionality also provided secondary users with a “signal” for the cat lenses’ operational modes: lids closed (augmented vision on) or lids open (augmented vision off). The lenses themselves could change appearance depending on the user’s preference, from completely opaque to partially transparent or displaying an image. The design also provided primary users with the ability to revert back to a “normal appearance”, facilitating regular social interaction.

Summary

Although Google Glass joined the ranks of technology that was never mass adopted, it has served as an important case study into the way primary and secondary users interact with augmented reality. There are valuable lessons here for mixed reality hardware design:

  1. The way hardware looks on the primary user.
  2. The perception of the primary user’s behaviour while they are wearing the hardware.
  3. Privacy of secondary users who are viewed by primary users.
  4. Signals that indicate the hardware’s functions and whether they are in operation.

At the moment, mixed reality is little more than a novelty. A source of momentary amusement. But soon, there will be an explosion of content ideation and development. A renaissance of art, science and engineering based on digital content interacting with real world elements. And the ability to capture, document and share those experiences will be a delicate balancing act between the primary user’s experience and the impact of their behaviour on secondary users around them.

Fantastic beasts… and gnomes in your garden

I think there’s something truly magical about the prospect of mixed reality. As children we grow up with fairy tales about fantastic creatures living in other worlds or long lost lands. Dragons, elves, unicorns, fairies… and maybe even gnomes.

My friend posted this photo on Instagram today.

Another photo revealed “windows” higher up the same tree.

What a wonderfully creative and charming way of engaging people’s imagination as they walk by!

There is something whimsical and intriguing about imagining that friendly magical creatures live among us, demonstrated in the enduring popularity of fiction from classic fairy tales to contemporary works such as J.K. Rowling’s Fantastic Beasts and Where to Find Them. But there are few opportunities to experience these characters in real life. Although amusement parks like Disneyland include amazing buildings, life-size characters and special effects, you have to travel to select locations.

There is an opportunity for mixed reality to create magic on our very own doorstep. It’s easy to imagine how mixed reality could extend this cute idea of a “fairy tree”:

  • A fairy living in your very own garden. She might come out in the morning and water her plants. On a sunny day, set up a deck chair and read a book.
  • A collective of fairies living in a local park. They come out of their homes, visit each other and shelter under leaves when it rains.
  • At night, the windows light up or a lantern is lit outside their door, to show that they’re home.
  • You could leave tiny notes in their mailboxes and the next day you’d have a response, in your own mailbox.

This has interesting implications for brands like Disney. Yes, people may still want to travel to Disneyland and experience the magical world of Disney through all five senses. But the company will also have an opportunity to design experiences that can play out within people’s own homes. Instead of just watching DVDs, dressing up and playing with figurines, children will be able to act out their favourite scenes with the characters themselves. (Extending this idea, you can also imagine the fun that people could have acting out the scenes of their own favourite movies.)

And if you’re in any doubt as to whether adults and not just children would be interested in these types of experiences and exchanges, here’s an example. Near Bunbury, 200km south of Perth in Western Australia, lies a little village called Gnomesville, populated solely by gnomes. Over the years it has become a tourist attraction in its own right as people from near and far arrive to discover the secret lives of gnomes.

I backed this Kickstarter: ZapBox

I’ve backed a Kickstarter project that hopes to create inexpensive mixed reality hardware and software. The project is called “ZapBox: Mixed Reality for $30” and it’s a very ambitious project by a team of developers based in the UK. I’m one of 1,854 backers who successfully funded the project this month.

As per their Kickstarter page, this is a functional prototype of what the ZapBox mixed reality kit looks like.

What is it?

Here’s a description of how it all works.

“ZapBox includes a set of physical markers (you guessed it, made of cardboard) that you place around your room. We call these markers “pointcodes” and we’ve designed them to be detected and identified in camera images very quickly, even when they are small or if there are many of them in view at once.

Thanks to the joy of maths [or math for our American friends] and our crack team of software wizards the ZapBox app is able to build a map of where all the pointcodes are in your room, and then use that map to work out your exact position and viewing direction using the pointcodes it can see at any time. We also use pointcodes on the ZapBox controllers to allow us to fully track them in 3D whenever they are visible to the camera.

As well as Mixed Reality, ZapBox can also offer Room-Scale Virtual Reality experiences. The virtual content can completely cover up the camera feed shown in the background, putting you into an immersive, entirely virtual environment that will react to your movement in space. Any motion of the controllers in the real world will also be accurately mapped into the VR view to provide natural interaction with content.”
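To get an intuition for what that pointcode map makes possible, here’s a deliberately simplified 2D sketch in TypeScript. The types and numbers are hypothetical and this is not Zappar’s actual algorithm: the point is simply that if you know a pointcode’s pose in the room and its pose relative to the camera, composing the two rigid transforms recovers the camera’s pose in the room.

```typescript
// Simplified 2D illustration (hypothetical types, not Zappar's code): recover
// the camera's pose in the room from a single detected pointcode.

interface Pose2D { x: number; y: number; headingRad: number; }

// "Map" built during setup: pointcode id -> pose of that marker in room coords.
const markerMap = new Map<number, Pose2D>([
  [7,  { x: 2.0, y: 0.0, headingRad: Math.PI / 2 }],
  [12, { x: 0.0, y: 3.0, headingRad: 0 }],
]);

// What a computer-vision layer might report each frame: the pose of a detected
// marker expressed in the camera's own coordinate frame.
interface Detection { id: number; inCameraFrame: Pose2D; }

// cameraInRoom = markerInRoom composed with inverse(markerInCamera)
function cameraPoseFrom(det: Detection): Pose2D | undefined {
  const markerInRoom = markerMap.get(det.id);
  if (!markerInRoom) return undefined;

  // Invert the rigid transform "marker as seen from the camera".
  const { x, y, headingRad } = det.inCameraFrame;
  const invH = -headingRad;
  const invX = -(Math.cos(invH) * x - Math.sin(invH) * y);
  const invY = -(Math.sin(invH) * x + Math.cos(invH) * y);

  // Compose with the marker's known pose in the room.
  const cos = Math.cos(markerInRoom.headingRad);
  const sin = Math.sin(markerInRoom.headingRad);
  return {
    x: markerInRoom.x + cos * invX - sin * invY,
    y: markerInRoom.y + sin * invX + cos * invY,
    headingRad: markerInRoom.headingRad + invH,
  };
}

// Example: pointcode 7 seen one metre straight ahead, facing back at the camera.
// The camera must therefore be one metre in front of that marker's known position.
console.log(cameraPoseFrom({ id: 7, inCameraFrame: { x: 1, y: 0, headingRad: Math.PI } }));
```

The real system of course works in three dimensions, uses multiple visible pointcodes at once and has to cope with perspective, but the underlying idea described in the quote above is the same.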

Backing the project entitles me to the Developer + Beta Bundle which includes:

  • ZapBox Mixed Reality kit
  • Beta access to ZapBox app
  • Beta access to ZapWorks Studio for ZapBox
  • PDF print-at-home pointcodes and controller nets
  • Hosting for 10 experiences from your personal ZapWorks account

What am I going to do with it?

Just as Google Cardboard helped to bring virtual reality to the mainstream via simple, inexpensive and easy to use hardware and software, I think ZapBox has the potential to unleash the same possibilities for mixed reality.

Besides looking like a whole lot of fun, I’m hoping that ZapBox will also provide me with another tool to quickly develop prototypes for the mixed reality concepts I’m currently exploring.

When I’ve received the kit and had a play around with it, I hope to blog about my experience and fill you in on what I’ve learned.

How to make a rapid prototyping kit

Years ago, I bought a plastic tool box and started collecting useful bits and pieces for creating prototypes. Good rapid prototyping should be 3 things: cheap, fast and easy. So I procured items that would help encourage this approach.


Here’s a list of what’s in my tool kit. (Note, not everything fits in my tool box at one time but it’s very useful for carrying stuff to a prototyping session.)

  • Pens, pencils and markers/textas.
  • Simple stickers, different shapes and colours.
  • Plain white paper for sketching. Long rolls of butcher’s paper or kids craft paper are great for when you need large sections of paper. Coloured paper is good too.
  • Post-it notes. I have a love/hate relationship with these. They are very useful but can be over-relied on during the brainstorming stage at the expense of moving on to create actual prototypes.
  • Graph paper is ideal for sketching levels, layouts and journeys. Hexagonal paper in particular is very useful for mapping movement because your characters can move diagonally. If you can’t find any locally, you can find these online and print them.
  • Cardboard – all kinds! It comes in so many different thicknesses and is essential for 3D prototyping.
  • Adhesives. Tape, double sided tape, glue sticks, blu tack, hot glue gun. Depending on whether you need to stick things together temporarily or permanently.
  • Small figurines to represent players and NPCs (non-player characters), preferably in different colours. For example, very small toys or game tokens.
  • Cutting blade/scalpel.
  • Steel ruler.
  • Cutting mat.
  • Larger items that are useful for role playing like plastic cones, sticker labels, string, blu tack, larger sections of paper/cardboard etc.

Where do I find all these amazing items?

Prototyping materials don’t have to be expensive. In fact, they shouldn’t be. Plus, the added benefit of cheap (or free) items is that you aren’t so precious with supplies and are more likely to discard prototypes for the next iteration. But I do have a few tips on how to creatively source items for your kit.

Your local Daiso, Poundland, reject or two dollar store. These stores stock basic household items and stationery for very low prices. When you start developing more complex 3D prototypes, get creative and think about repurposing items beyond their original design. For example, if you need a pen holder you don’t have to buy a special container marked “pen holder”. You could use a glass jar, teapot, vase, chopstick storage box, plastic food storage container etc.

Recycling. When I was a kid I had a craft box. I saved every bit of cardboard, ribbon, and egg carton I could get my hands on. Even now as a designer I have one. And why not? It’s free and once you get started, you’ll start looking differently at objects you might once have just thrown out.

Op shop/thrift shop. Ok, I’m the first to admit it. I love visiting Op Shops. I love browsing racks, picking through shelves, and rummaging in boxes. I wonder about these objects that get left behind. For what purpose were they originally bought and why weren’t they needed any more? I love to recycle rather than buy something new. I also love repurposing items for uses different to what their designers may have intended. Mostly, I love the random serendipity of it all. There are hidden delights for one and all! So when you deck out your prototyping kit, be sure to check out all the various sections of your local Op Shop. For example, the pieces that come in board games are perfect as player figurines or tokens and the board itself may come in handy.

I hope this information helps to whet your appetite so you can go forth and create your own rapid prototyping kit. You can start collecting bits and pieces today! The more you prototype, the more you’ll start to see everyday common objects as potential tools and parts for your next prototype. And that’s another benefit of creating a kit: it gets your brain working and thinking about how to create all kinds of prototypes.

A geospatial project: Maps for Lost Towns

I joined GeoGeeks earlier this year to meet other people interested in cartography and location based data. I think that mapping content to specific locations will be a critical feature of mixed reality. It opens the door to creating customised experiences based on the visitor and their needs. One set of geocoordinates could trigger a range of visual or auditory outputs depending on the visitor (demographics, psychographics, past behaviour), the context of their visit and other environmental cues (weather, time of day, local events, other people within range).

I’m basing this theory on my experience with digital marketing today. Online targeting is based on knowing who visits a website, their past behaviour and what they are looking for. This information lets an advertiser decide what type of content to serve that visitor, whether it’s through a banner ad, a header image or even the adjusted call-to-action wording on a button. Today, people navigate the internet by visiting websites or interacting with emails and apps which in turn direct them to specific websites. Tomorrow, people will walk around in real life and the landmarks they visit and buildings they enter will have been fused with digital content. But we can’t overwhelm them with every single piece of information that’s available. That would be like a super cluttered website with heaps of ads and spam. (Keiichi Matsuda’s video Domestic Robocop demonstrates this point perfectly.) Instead we should only serve content that is relevant and helpful to that person.
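As a rough sketch of that idea (all of the names below are hypothetical and not tied to any real platform), serving mixed reality content at a location could be as simple as scoring each candidate experience against the visitor’s profile and context, then showing only the best match:

```typescript
// Hypothetical sketch: pick the single most relevant experience for a location
// rather than overwhelming the visitor with everything available there.

interface Visitor { interests: string[]; pastVisits: string[]; }
interface Context { timeOfDay: "morning" | "afternoon" | "evening"; weather: "sunny" | "rainy"; }
interface Experience { id: string; tags: string[]; bestTime?: Context["timeOfDay"]; }

function score(exp: Experience, visitor: Visitor, ctx: Context): number {
  let s = 0;
  // Relevance: shared tags with the visitor's interests.
  s += exp.tags.filter(t => visitor.interests.includes(t)).length * 2;
  // Novelty: down-rank experiences the visitor has already seen.
  if (visitor.pastVisits.includes(exp.id)) s -= 3;
  // Environmental fit: prefer experiences suited to the time of day.
  if (exp.bestTime === ctx.timeOfDay) s += 1;
  return s;
}

function pickExperience(candidates: Experience[], visitor: Visitor, ctx: Context): Experience | undefined {
  return candidates
    .map(exp => ({ exp, s: score(exp, visitor, ctx) }))
    .sort((a, b) => b.s - a.s)
    .filter(x => x.s > 0)[0]?.exp;
}

const visitor: Visitor = { interests: ["history", "architecture"], pastVisits: ["ghost-tour"] };
const ctx: Context = { timeOfDay: "evening", weather: "sunny" };
console.log(pickExperience([
  { id: "ghost-tour", tags: ["history"], bestTime: "evening" },
  { id: "heritage-overlay", tags: ["history", "architecture"] },
  { id: "cafe-specials", tags: ["food"] },
], visitor, ctx));
// -> the "heritage-overlay" experience
```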

But those awesome experiences of the future start with an understanding of how our current environment is mapped, what kind of meaningful information can be overlaid and what people do with this information.

Maps for Lost Towns

Based in Perth, Western Australia, GeoGeeks holds fortnightly meet ups to explore geospatial technology using open source data and tools. The group has members with experience in all kinds of industries including mining, resources, government, and energy. Some are GIS (geographic information systems) technical specialists while others (like me) are hobbyists or programmers who want to expand their skill set. I hope to learn more about the technical side of GIS and help work on some cool mapping projects.

One of the projects the group has already begun to explore is Maps for Lost Towns. A few GeoGeeks discovered that the State Records Office of WA has a massive database of historical maps. These beautiful maps (some over 100 years old!) have been digitised and made available to the public to view online. The maps are protected by Crown copyright which expires 50 years from the date of publication. My understanding is that as long as the maps were published in 1966 or earlier, they can be uploaded for the purposes of this project. The maps could be a lot more useful and interesting if we could understand visually where they fit into the maps of today. This is technically possible but the difficulty lies in the fact that there are thousands of maps to review. So the goal of the project is to georeference these maps through crowdsourcing.

Open source mapping tools

Matching or “rectifying” a map against today’s streets or satellite imagery can be done through open source software such as Map Warper, a “map warper/map georectifier and image georeferencer tool”. Sounds fancy, but it’s actually quite an easy-to-use platform.

  • Upload a map.
  • Identify several control points on the map. These are points which you can confidently correlate to current day landmarks or streets.
  • The software rectifies the map by using the control points as anchors and then laying your map on top of current-day coordinates (a rough sketch of this step follows below).
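To give a sense of the maths behind that last step, here’s a sketch of the simplest possible case: fitting a plain affine transform from exactly three control points. This is illustrative only and not Map Warper’s implementation, which supports many control points and more sophisticated warps.

```typescript
// Sketch only: fit an affine transform from three control points so that any
// other pixel on the scanned map can be placed in real-world coordinates.
// Not Map Warper's implementation.

interface Point { x: number; y: number; }
interface ControlPoint { pixel: Point; world: Point; }
// world.x = a*px + b*py + c   and   world.y = d*px + e*py + f
interface Affine { a: number; b: number; c: number; d: number; e: number; f: number; }

function affineFrom(cp: [ControlPoint, ControlPoint, ControlPoint]): Affine {
  const p1 = cp[0].pixel, p2 = cp[1].pixel, p3 = cp[2].pixel;
  const det =
    p1.x * (p2.y - p3.y) - p1.y * (p2.x - p3.x) + (p2.x * p3.y - p3.x * p2.y);
  if (Math.abs(det) < 1e-9) throw new Error("control points are collinear");

  // Solve the 3x3 linear system for one world axis (values v1..v3) via Cramer's rule.
  const solveAxis = (v1: number, v2: number, v3: number) => ({
    m: (v1 * (p2.y - p3.y) - p1.y * (v2 - v3) + (v2 * p3.y - v3 * p2.y)) / det,
    n: (p1.x * (v2 - v3) - v1 * (p2.x - p3.x) + (p2.x * v3 - p3.x * v2)) / det,
    k: (p1.x * (p2.y * v3 - p3.y * v2) - p1.y * (p2.x * v3 - p3.x * v2) +
        v1 * (p2.x * p3.y - p3.x * p2.y)) / det,
  });

  const X = solveAxis(cp[0].world.x, cp[1].world.x, cp[2].world.x);
  const Y = solveAxis(cp[0].world.y, cp[1].world.y, cp[2].world.y);
  return { a: X.m, b: X.n, c: X.k, d: Y.m, e: Y.n, f: Y.k };
}

// Map any pixel from the old map into world coordinates.
function toWorld(t: Affine, p: Point): Point {
  return { x: t.a * p.x + t.b * p.y + t.c, y: t.d * p.x + t.e * p.y + t.f };
}

// Tiny example: pixel y runs downwards, world y runs upwards.
const t = affineFrom([
  { pixel: { x: 0, y: 0 }, world: { x: 10, y: 20 } },
  { pixel: { x: 1, y: 0 }, world: { x: 11, y: 20 } },
  { pixel: { x: 0, y: 1 }, world: { x: 10, y: 19 } },
]);
console.log(toWorld(t, { x: 2, y: 3 })); // -> { x: 12, y: 17 }
```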

Here’s an example from the website. The image below is a map from 1873 of the City of Roxbury in Boston, Massachusetts.

Map from Map Warper "1873 City of Roxbury, Part of Ward 15".
Map from Map Warper “1873 City of Roxbury, Part of Ward 15”.

The map was uploaded by a Map Warper member, Nebojsa Pesic, who identified 18 control points  and then rectified the map. You can see from the next image how the old map aligns with the current map. Cool, right?

Rectified map of “1873 City of Roxbury, Part of Ward 15” via Map Warper.

So being able to rectify Western Australia’s historic maps is possible (and fun!) but how do we encourage people to give it a go themselves?

User experience

Before creating a platform that allows visitors to play around with maps, I think it’s important to identify the kind of experience we want them to have. Why would they want to participate in this project? What are their expectations? What do they want to get out of it? Exploring these questions will give us information that we can use to create an ideal experience for them.

Firstly, I took a look at the kinds of people that we might like to engage with and encourage to use the site. I developed three user personas: Social Butterfly, Community Gamer and Community Minded. Each persona interacts with technology and digital content in a different way but I felt that each could gain something positive through a “good map experience”.


I reviewed the current Map Warper Information Architecture (IA) to see what kind of functionality was available to users of that platform. I mapped this out in the IA diagram below. This helped me to understand the following:

  • What kinds of things people could do with maps (search, upload their own maps, rectify, share maps, comment on them).
  • The metadata available on each map (location, publisher, titles, tags, dates); see the sketch after this list.
  • How and why one might search a database of maps. For example, maps from a person’s home town or state, looking for maps from a specific timeframe, or types of maps like aerials.
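For example (the field names are my own guesses based on the metadata listed above, not Map Warper’s actual data model), each map’s record might look something like this:

```typescript
// Hypothetical record shape for a single historic map, based on the metadata
// listed above. Not Map Warper's actual data model.
interface MapRecord {
  id: number;
  title: string;
  publisher: string;
  location: string;        // e.g. the town or region the map covers
  tags: string[];          // e.g. ["townsite", "aerial"]
  publicationYear: number; // relevant to the Crown copyright cut-off
  rectified: boolean;      // has someone already georeferenced it?
}

const example: MapRecord = {
  id: 1,
  title: "Example townsite map",
  publisher: "Example state agency",
  location: "Western Australia",
  tags: ["townsite"],
  publicationYear: 1910,
  rectified: false,
};
```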


Reverse engineering the website’s IA also gave me insight into how they had laid out their functionality and hierarchies. Next, I took each user persona and created potential user flows. I kept it pretty simple so that it could be used as a starting point for discussions with other GeoGeeks.

I shared these initial concepts with the group along with some additional discussion points:

  • Do these three user personas reflect the main types of people who will use the website?
  • Do these user journeys reflect their needs? And the corresponding website functionality required?
  • To what extent will we use or adapt the original Map Warper IA?
  • Will the website be mobile responsive? This would be good for the Social Butterfly persona (to view content) and Community Minded (to view, upload and share content). In the future users could access cool content while they are mobile and have location services turned on.
  • Do we have maps for just WA or other states too?
  • Will all users have to log in? Or just those that want to match maps and add content? A logged in state will be necessary for saving personal information, activity and gamification elements.
  • Will users be able to upload their own maps?
  • Will we meet minimum Australian accessibility standards? http://www.australia.gov.au/accessibility
  • Should we include a social sharing plugin? I think yes! Map Warper uses http://www.addthis.com/

Although this is a very simple start to a complex project, I hope that this input will help progress the project and I look forward to helping it develop further!