Big data and AI call for a new business competency: Ethics.
Artificial intelligence (AI) and machines’ ability to “learn” mark a new chapter in digital transformation: they breathe new life into the potential of unstructured data and software, and signal a profound shift in interfaces and customer experience. They also introduce unprecedented risks and societal questions that enterprises have yet to confront.
2018 marked a pivotal year in tech, one in which such questions and risks became mainstream. From revelations of fake news and mass manipulation, to ever-more-pernicious cyberthreats, to unprecedented regulatory moves, tech is in the crosshairs. Leading organizations are rising to meet these new threats while reassessing the very *purpose* of digital transformation.
Despite a widespread crisis of confidence, enterprise preparation for AI has centered almost exclusively on data preparation and data science talent. But true enterprise readiness for AI must extend to the broader organization — chiefly its people, processes, and principles. Companies across every industry are vying to assemble the teams, tools, and skills not only to deploy machine learning, but to address the thorny and disorienting issues surrounding data collection, sharing, consent, regulation, transparency, and far beyond.
Now more than ever, the mass automation of big data and AI calls for a new business competency: a formalized and grounded approach to ethics. Our research looks at who is doing what, and what patterns are emerging across these programs.
We will explore each of these pillars in depth in an upcoming series of posts. Check out the report on which this post is based here, and as always, feel free to reach out with additional questions.