In June 2011, the United Nations Human Rights Council approved the new Guiding Principles on Business and Human Rights. This was a landmark moment, not only because governments, business, and civil society were able to agree on a framework for protecting and respecting human rights, but also because this framework has proven to apply well across a wide variety of topics. I've worked extensively with tech companies, and here are five lessons on how they can apply and respect human rights in the age of digital transformation and growing scrutiny from governments and users.
The Snowden revelations and the Facebook privacy scandals have raised awareness among users, activists, and governments of technology's huge potential to generate human rights abuses. That sounds obvious today, but it wasn't so obvious a decade ago.
In August 2011, BSR published a report offering a perspective on how to apply these UN Guiding Principles to the information and communications technology (ICT) industry. At BSR, I've undertaken significant work to put our thinking into practice: we've advised on the writing of human rights policies, facilitated engagements with human rights stakeholders, and conducted human rights impact assessments (HRIAs).
Big Data, the Internet of Things, virtual reality, and artificial intelligence are shaping very different environments for protecting and respecting human rights. Here are five lessons I've learned across my projects for managing some of the complexity at stake in the profound digital transformations underway.
Put Users at the Forefront
When it comes to designing products, users are always put at the forefront. When it comes to understanding how they may use, and misuse, those products, that is another story. Government surveillance using mobile phone and roaming data, for instance, has proven effective at tracking political opponents. Troll networks have used social media to amplify discussions and social interactions on topics shaping citizens' and voters' perceptions of societal issues, influencing elections or the acceptance of vaccination campaigns. The digitalization and automation of business environments are also reshaping work: for a growing number of workers across the world, a robot is now the boss giving instructions…
From a different perspective, I've also observed people using their own judgment to play with algorithms, entering wrong information to manipulate outcomes. This may or may not work, but this social dimension can have a strong bearing on the way machine learning systems process information and suggest decisions.
All in all, the innovations driving digital transformation usually come with good intentions. But problems primarily occur in the way users actually use or misuse products and services. Ongoing attention is required on this front.
Develop Results-Based Ethics Approaches
There have been multiple scandals involving algorithms developed, for instance, to optimize the assignment of large numbers of students or patients to universities or hospitals. These matching algorithms are well known and have been extensively tested since the 1950s. But when matching profiles, geographies, and preferences at scale, processing the data can lead to unforeseen consequences. Cases of discriminatory practices have been documented, based on location, last name, or other factors capable of producing discriminatory outcomes. The algorithms have been partly improved, but the truth is that even the best experts are unable to fully understand how they operate and draw conclusions…
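The matching algorithms mentioned above are typically variants of the Gale-Shapley deferred-acceptance procedure, which underpins hospital-resident and school-assignment systems dating back to the 1950s. The minimal sketch below (with hypothetical toy data: three students, three universities, one seat each) shows how purely preference-driven matching works, and why any bias encoded in the preference lists flows straight into the outcome:

```python
from collections import deque

def deferred_acceptance(applicant_prefs, program_prefs):
    """Gale-Shapley deferred acceptance: applicants propose in order of
    preference; each program tentatively holds its best proposal so far.
    Returns a dict mapping applicant -> matched program."""
    # rank[p][a] = how highly program p ranks applicant a (lower = better)
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    free = deque(applicant_prefs)                  # applicants still unmatched
    next_choice = {a: 0 for a in applicant_prefs}  # next program to propose to
    held = {}                                      # program -> applicant held
    while free:
        a = free.popleft()
        p = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        current = held.get(p)
        if current is None:
            held[p] = a                            # seat was empty
        elif rank[p][a] < rank[p][current]:
            held[p] = a                            # bump the weaker candidate
            free.append(current)
        else:
            free.append(a)                         # rejected, try next choice
    return {a: p for p, a in held.items()}

# Hypothetical toy data for illustration only.
students = {"ana": ["mit", "eth", "lmu"],
            "bo":  ["mit", "lmu", "eth"],
            "chi": ["eth", "mit", "lmu"]}
programs = {"mit": ["bo", "ana", "chi"],
            "eth": ["ana", "chi", "bo"],
            "lmu": ["ana", "bo", "chi"]}
match = deferred_acceptance(students, programs)
# -> {'bo': 'mit', 'ana': 'eth', 'chi': 'lmu'}
```

The result is provably "stable" (no student-program pair would rather have each other than their assigned match), yet it is only as fair as the ranked lists fed into it: if a program's ranking correlates with location or last name, the algorithm will faithfully reproduce that discrimination at scale.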
Artificial intelligence will play a growing role and increasingly impact our lives; I am stating the obvious here. There is an ongoing debate over whether AI will drive more objective decisions, grounded primarily in facts and data at very large scale, or whether, fed with material that is inherently subjective and full of unethical content, AI will exponentially amplify discriminatory practices, among other ethical issues, as it generates trends and predictions…
Consequentialism, or results-based ethics, says that right and wrong depend on the consequences of an act: the more good consequences an act produces, the better it is. That basically calls for a close review of consequences so we can act on them. It is different from exclusively observing how users may misuse products. It means accepting that machine learning will remain a black box humans will never fully understand and correct. That shapes a very different approach to technology and innovation, and the automotive industry is a good illustration:
- "In the past", car makers took a proactive approach to mitigating risks, safety risks in particular. A car would be authorized only if it complied with safety standards. Whenever accidents occurred, a review and corrective process would follow to continuously improve safety.
- With machine learning and the development of complex algorithms, autonomous cars are entering a different operating environment: we will simply never fully understand and master their decisions. Assuming our societies allow autonomous cars to circulate at large scale (and that is already the case in multiple geographies across the US and Europe), we will have to accept imperfect consequences. We will have to reactively adapt ethics to real-world cases without pretending to fully correct the algorithms. Insurance providers, or people who lose a relative in an accident, will clearly see the difference that comes with consequentialism…
Cooperate with Business Partners
Digital transformations are creating a new space of vulnerability and dependence for companies operating with business partners in ecosystems. Privacy and data protection is a good example. A healthcare company may collect data from patients to drive product development activities. But it is very likely this sensitive data will actually be collected, stored, or even shared with business partners, for reasons ranging from outsourcing to strategic partnerships.
What do companies really know about the practices of their business partners? Of course, they sign contracts, adopt codes, and may conduct audits. The point remains that digital transformation creates a clear hotspot of vulnerability: a cyberattack suffered by a business partner may force a company into bankruptcy.
Digital officers are increasingly appointed and mandated to identify sensitive data and ensure its secure management. This is clearly going to be a major area of vulnerability, one where companies have limited leverage overall and may pay the consequences of infringements at very high cost…
Explore Social Impacts
Most tech innovators are fascinated by what they think they are offering societies. They rarely care about, or explore, the broader impacts that come with their innovations. Airbnb and Uber neither cared about nor anticipated how the success of their solutions would affect municipalities, other workers, and local communities… This poor understanding of social impacts generated social resistance and made the market penetration of Airbnb's and Uber's solutions more complicated across a large number of markets.
From a human rights perspective, respect comes with the capacity to respect traditions, cultural activities, and, fundamentally, the ways communities want to lead their lives. Exploring the social impacts of solutions is not only a way to mitigate the risk of resistance. It is also a good way to explore whether and how technologies infringe cultural, traditional, or other rights. In other words, exploring social impacts is a good way to understand the kind of society we are shaping through digital transformation, and whether that is intended or not.
Of Course, Get the Basics Right!
Of course, respecting human rights also involves the more traditional areas of responsible employee and supply chain management: harassment, workers' working and living conditions, and due diligence of sourcing activities, among other complex and challenging social topics.
That said, I also believe that getting the basics right will increasingly require a strong emphasis on the concept of inclusion. Here again, if we put users at the forefront, technology companies can explore whether and how product design drives gender biases (a fascinating topic), or whether and how products contribute to the digital inclusion of every segment of our societies, including the most vulnerable people.
Author of several books and resources on business, sustainability, and responsibility. Works with top decision makers pursuing transformational change for their organizations, leaders, and industries, and with executives improving the resilience and competitiveness of their companies and products in light of their climate and human rights business agendas. Connect with Farid Baddache on Twitter at @Fbaddache.