The Rise of Digital Ethics in 2019

2018 was a Pivotal Year for the Tech Sector, and Digital in General

The luster of Silicon Valley is wearing off amid revelations of fake news, societal manipulation, and threats to public health, elections, security, and beyond.

Facebook’s personal data breaches were only the beginning. This was the year the EU implemented the General Data Protection Regulation (GDPR), sweeping legislation that affects every company processing EU citizens’ data, putting data protection practices (and sanctions) at the forefront of business agendas across the globe. We hear warnings every day of more pernicious and diverse cyberthreats, spanning attacks on our political, healthcare, energy, and corporate infrastructures.

2018 also saw unprecedented fallout and employee backlash within Google, Amazon, and Microsoft, challenging the companies’ support of AI applications for military, law enforcement, and surveillance. And that’s not to mention the movements surfacing around mental health and digital “detox.” In July, France banned smartphones in schools; the World Health Organization officially classified “gaming disorder” as a mental health condition; and enabling digital Time Well Spent became a strategic priority for Google, Apple, and Facebook.

A Timeline of News Events in the Last 12 Months

Image source: AI Now

We’ve witnessed a growing tide of alarming events, research findings, and abuses sparking international discourse and corporate concern. What are the political, social, environmental, and health impacts of ubiquitous data and technology?

In 2019, the Role of Brands Shifts from Reactive to Responsible

Read more about this trend in our latest research: Innovating with Impact in an Era of Accountability, a partnership with Edelman Digital and Kaleido Insights. This is a must-read for forward-looking brands!

The very notion of AI (understanding and reproducing human cognition) forces us to hold a mirror up and re-evaluate the biases and assumptions embedded in the data we use to train AI models. It forces us to consider implications of new digital interfaces like voice and facial recognition and reconsider structures for accountability when we can’t solicit explanations from machines. It forces the arbiters of AI, enterprises in particular, to ask an untold number of societal questions they have yet to confront. 2019 marks the year a new business competency is born: a formal approach to digital ethics.

As AI infuses business, brands must develop a new ethics function with the objective of shifting from mere risk avoidance to forward-thinking planning and counter-efforts across key areas (e.g., design, data protection, guidelines, processes, and training). Enterprise preparation for AI has centered almost exclusively on data prep and data science talent. But enterprises that fail to ready the broader organization (chiefly its people, processes, and principles) don’t just stunt their capacity for good AI. They risk sunk investment, jeopardize employee trust, and face brand retaliation, or worse.

Our research addresses the following:

  • How to define digital ethics
  • Core program objectives and roles
  • Three pillars of enterprise digital ethics programs
  • Integrating ethics into tech stack and AI workflows
  • Steps brands can take
  • Critical questions for leaders, product teams, designers, and employees
  • Leading consortia and industry groups

We’re tracking developments closely, researching industry best practices, and supporting clients through these questions. Feel free to reach out with questions or feedback, and in the meantime, access our definition framework in Innovating with Impact in an Era of Accountability.


