(This article was first published on Medium on Dec 29, 2018 by me.)
Humane technologies are technological solutions that put concern and compassion for humanity at the core of the solution.
Automated decision-making models are moving from research environments into the real world, creating new sets of social challenges. However intelligent and mathematically accurate autonomous systems may be, they run into problems when interfaced with a world populated by unpredictable human beings. Recent examples include the Cambridge Analytica scandal, self-driving car crashes, and security breaches at Facebook and Google, all technology events that raised ethical concerns in just the last two years.
Worldwide, companies are racing to use big data analytics and computation power to gain a competitive advantage. More data points mean more raw material and variables to build more accurate predictive systems. In this competition to acquire more and more data and automate decision making, technology companies often overlook one crucial aspect — the real-life consequences of these decisions on living, breathing human beings.
“Algorithms don’t make things fair. They repeat our past practices, our patterns. They automate the status quo” — Cathy O’Neil
This means that autonomous systems amplify desirable AND undesirable consequences alike, unless the undesirable ones are actively addressed.
What if human needs and goals are incorporated into the very core of autonomous solutions as they are built?
What principles could guide the development of humane technology?
Building an ethical framework for Humane Technology
What do we mean by ethics when we refer to technological artefacts? While interviewing 13 technologists around the world, one ask across the board was for a common vocabulary to identify ethical implications, and for ways to address these concerns in a time-efficient manner.
As part of my MA thesis at Hyper Island, I derived a framework that breaks ethics down into six considerations acknowledging human concerns, with corresponding principles for building ethically aligned technology. The framework was arrived at by overlaying the psychological goals that create meaningful digital experiences (Hassenzahl et al., 2010) with known concerns of emerging technology (Stahl et al., 2016).
Ethical lenses and principles for Humane Technology
The ethics for humane technology framework provides lenses for understanding human rights in the digital age and the technological phenomena that threaten those rights, together with principles for building technology that creates a beneficial future for humanity. It consists of six lenses for understanding implications on human life, each paired with principles that can be used to counter these threats in your product development process.
Wellbeing is about aligning system goals and incentives in the best interest of humanity. It examines which habits are promoted and how the business model supports the stated goals.
Wellbeing can be enhanced through the following principles:
The user’s best interests guide the system goals.
The user is informed and made aware of the system goals.
Habits and user experiences are designed to enable competency and connection.
The business model is built to support the human outcomes of the solution.
Inclusion is about adapting to the varied capabilities of the user, embracing diversity and creating a sense of belonging. This extends beyond the user's abilities, adjusting for factors such as digital literacy and structural inequalities.
Inclusion can be enhanced through the following principles:
Diverse capabilities of the target user base are mapped and accounted for in the system design.
Different groups of users are represented in the dataset used to train the algorithm.
Extra attention is paid to addressing inequalities when incorporating vulnerable communities into a database or service.
Target users’ perspectives are sought and incorporated into the system design.
The team has representatives from the target groups.
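To make the second principle actionable, a product team could run a simple representation check on its training data before modelling. The sketch below is a hypothetical illustration in Python, not part of the original framework; the function name, group labels and expected shares are my own assumptions, to be replaced with real target-population data.

```python
from collections import Counter

def representation_gaps(records, group_key, expected_shares, tolerance=0.05):
    """Compare each group's share of the dataset against its expected
    share of the target population; flag groups outside the tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Hypothetical dataset where group "B" is under-represented
# relative to a 60/40 split in the target population.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_gaps(data, "group", {"A": 0.6, "B": 0.4}))
```

A check like this catches under-represented groups before they become blind spots in the trained model, rather than after deployment.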
Privacy is about honouring the user’s ownership of their information in the way it is collected, analysed, processed, interpreted and shared.
Privacy can be enhanced through the following principles:
The user owns their own data.
The user controls who has access to their data.
The user is informed about how their data is used.
The user's permission is acquired again whenever access to their data changes.
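These privacy principles can be made concrete with a minimal consent ledger: access is denied by default, the user can revoke it at any time, and a change of purpose requires fresh permission. The Python sketch below is my own hypothetical illustration, not a reference implementation from the framework.

```python
class ConsentLedger:
    """Minimal sketch of per-user, per-requester consent records."""

    def __init__(self):
        # Maps (user_id, requester) to the purpose the user consented to.
        self._grants = {}

    def grant(self, user_id, requester, purpose):
        self._grants[(user_id, requester)] = purpose

    def revoke(self, user_id, requester):
        self._grants.pop((user_id, requester), None)

    def may_access(self, user_id, requester, purpose):
        # Access is allowed only if consent exists AND covers this
        # exact purpose; a new purpose means asking the user again.
        return self._grants.get((user_id, requester)) == purpose

ledger = ConsentLedger()
ledger.grant("alice", "analytics-service", "usage statistics")
print(ledger.may_access("alice", "analytics-service", "usage statistics"))  # True
print(ledger.may_access("alice", "analytics-service", "ad targeting"))      # False
```

The deny-by-default design means the burden is on the system to prove consent, not on the user to prove its absence.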
Security is about protecting the user’s psychological, emotional, intellectual, digital and physical safety.
Security can be enhanced through the following principles:
Sensitive data is stored in separate, highly secure databases.
Failsafes are in place in the event of technical/system failure.
Security vulnerabilities are proactively explored and addressed.
Measures and procedures are in place to alert users and support them in the event of a data breach or hack.
Accountability is about creating transparency in how decisions are made and how biases are addressed, and about creating pathways for the user to challenge such decisions.
Accountability can be enhanced through the following principles:
Biases are tested for and addressed.
The decision-making process can be explained in a manner that the user can understand.
Avenues are in place to challenge and counter the decisions.
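Testing for bias can start with something as simple as comparing positive-decision rates across user groups, a measure commonly known as demographic parity. The Python sketch below is an illustrative example under that assumption, not part of the original framework; the data is invented.

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rate across groups; 0.0 means every group is approved equally often."""
    rates = {}
    for decision, group in zip(decisions, groups):
        approved, total = rates.get(group, (0, 0))
        rates[group] = (approved + decision, total + 1)
    shares = [approved / total for approved, total in rates.values()]
    return max(shares) - min(shares)

# 1 = approved, 0 = denied; group "x" is approved far more often.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A large gap does not by itself prove unfairness, but it flags exactly the kind of decision pattern a user should be able to see explained and to challenge.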
Trust is about creating a reliable environment that promotes authentic engagement.
Trust can be enhanced through the following principles:
The content, entities or claims are verified for authenticity.
The product or service is trustworthy in the eyes of the user.
The company’s stance or principles are accessible to the public.
Ethics as a strategic advantage and an ongoing process
Gartner has named digital ethics and privacy as a strategic trend for 2019.
According to the Gartner report, “By 2021, organizations that bypass privacy requirements and are caught lacking in privacy protection will pay 100% more in compliance cost than competitors that adhere to best practices.”
The technology community is increasingly concerned about ethical implications and is formalising practices to address ethical issues proactively. Google announced its ethical principles and a six-month review of those principles in action. Earlier this month, Salesforce announced that it had hired Paula Goldman to lead its Office of Ethical and Humane Use. While these actions may seem like mere drops in the ocean relative to what is needed, given the volume of unforeseen and unintended consequences that need to be tackled, they are definitely steps in the right direction. And they are setting the precedent for ethics to have a seat at the table.
Thanks to Rasagy Sharma, Tash Willcocks, Urska Ticar and Phil Hesketh for their inputs on this piece.
— — —
These principles are a step towards a common vocabulary and are not meant to be conclusive. To know more about the framework or read the research behind it, drop me a line. If you'd like to understand the ethical consequences of your product and embed these considerations into your development process, let's talk.