Protecting human rights in the face of rapidly changing technology and innovation

It’s not uncommon while scrolling through one’s social media feed to come across clickbait headlines and fear-inducing sound bites foreshadowing the imminent collapse of civilisation at the hands of some emerging or disruptive technology like artificial intelligence, cryptocurrency, 3D printing, drones, robots or self-driving cars.

Often absent, however, is a thoughtful and considered discussion around the development of these technologies and how to avoid worst-case scenarios. But where to begin? It would be almost impossible to create a set of rules or guidelines to encompass every single emerging technology, every possible iteration of that technology and the myriad of subsequent applications. A far simpler and more practical approach is to focus on the real issue behind most of the fear: how do we protect ourselves from potential harm? To this end, the existing framework around human rights could provide an ideal place to start.

Human Rights and Technology

The Australian Human Rights Commission launched a human rights and technology project earlier this year, which included the release of the “Human Rights and Technology Issues Paper”. This has been followed by nationwide consultation with invited industry experts. This week, a roundtable discussion was held in Perth with representatives from the Australian Human Rights Commission, including the current Human Rights Commissioner, Edward Santow.


It was a pleasure to be invited as a representative of Voyant AR and to participate in this discussion, exploring how technology can impact human rights and what can be done to ensure that human rights remain protected in the future. Participants included representatives from industry, government and civic organisations across a wide range of sectors, including artificial intelligence (AI), augmented reality (AR), computing, blockchain, legal, web accessibility and disability services.


A central theme in our discussions was the idea of responsible innovation, particularly in the areas of:

  • AI in decision making
  • How people with disability can experience the benefits of new technology
  • How technology impacts specific groups such as children, women and older people
  • How a regulatory framework can protect human rights without stifling creativity and innovation, while remaining flexible enough to encompass rapid change.


As an AR Designer who has studied Psychology and International Relations, I believe responsible innovation is critical in the design of user experiences for existing and emerging technologies. This includes consideration of a broader context of who might use that technology, how they are able (or unable) to access it and what the impact of their interaction might be.

Admittedly, this is not an easy approach to take. However, the release of the Human Rights and Technology Issues Paper is both timely and thought-provoking. In the face of fast-paced technological innovation and change, how can we ensure that fundamental human rights are protected?

Human rights and technology are inextricably intertwined

When most people think about human rights, they may picture a foreign country very different from their own, with extreme human rights violations such as human trafficking, unlawful detainment or ethnic cleansing.

But human rights are very much a part of everyday living. It may not seem obvious at first, but whenever technology can direct, guide or alter human behaviour, there is a potential impact on our human rights.

  • Right to equality and non-discrimination. Are people discriminated against based on their race, colour, religion, sex, sexuality or disability? Some dating apps and websites have been accused of allowing racial profiling.
  • Freedom of expression. How do we balance open communication against the dissemination of hate speech and the proliferation of fake news?
  • Right to benefit from scientific progress. Can all sectors of the community access and use new technology? Barriers can include cost, regional location, language, cognitive ability and access to the internet.
  • Freedom from violence. How could a technology facilitate or incite violence and abuse? What design improvements or safeguards can be put in place to protect users?
  • Accessibility. Can people afford or access the technology required to fulfil basic tasks like paying bills or applying for a job? How can we make technologies and processes as inclusive as possible?
  • National security/counter-terrorism. How do we balance security against threats without infringing on other human rights, such as privacy?
  • Right to privacy. The current debate on opting out of My Health Record demonstrates the confusion Australians have around the type of data the government wishes to retain, how it will be used in the future and our right to medical privacy.
  • Access to information and safety for children. How can children benefit from technology while being protected from exploitation? Concerns range from the sharing of photos and personal information to the long-term impact on emotional development.
  • Right to fair trial and procedural fairness. How could AI be used in decision making during the judicial process? Could gender and racial bias persist in AI systems?

Our seemingly benign everyday interactions with technology – like checking email or scrolling through Facebook – are actually the product of millions of choices made during ideation, design, prototyping, iteration, development and deployment. And each choice has the potential either to erode human rights or to strengthen and protect them.

The Human Rights and Technology Issues Paper includes a quote from the current Australian Human Rights Commissioner that neatly sums up the idea of responsible innovation: “Technology should exist to serve humanity. Whether it does will depend on how it is deployed, by whom and to what end.”

Responsible innovation needs to be taken seriously by anyone working in technology, and initiatives like the Human Rights and Technology project should be applauded and supported. We may fear robots, AI and fake news, but ultimately it’s people who are building them.