Impact Analysis: Business Impacts of Deep Learning

By Jessica Groopman, Jaimy Szymanski, Rebecca Lieb, and Jeremiah Owyang of Kaleido Insights.

The ever-urgent business question of “what do we do with all of this data?” is colliding with our endless fascination with technological biomimicry of human intelligence. While this could be said of much of artificial intelligence (AI), deep learning is unique in that it is a computing construct based loosely on the architecture of the human brain. Meanwhile, businesses are struggling with the massive volumes, varieties, and velocities of data generated each day; learning from that data thus becomes the most strategic objective, the one that justifies the investment in digitizing in the first place.

Kaleido Insights’ methodology for analyzing emerging technology assesses the impacts on humans, on businesses, and on the ecosystem. As part of our ongoing coverage, we’ll be covering a series of topics using our methodology to help business leaders first understand, and then see beyond the bright and shiny to cut to what matters.

In each post, all Kaleido Insights analysts conduct a joint analysis session around one topic (e.g. technology, event, announcement, etc.). In this post, we analyze the business and organizational impacts of deep learning.

Topic: Deep Learning

Examples: Google Brain and DeepMind; IBM Watson; H2O.ai; and many others

Impact Analysis: Businesses and Organizations

First, a definition and a distinction: deep learning vs. machine learning

A subset of machine learning, deep learning (DL) is distinct in that it is composed of multiple layers, typically between 10 and 100 (hence ‘deep’), in contrast to machine learning algorithms, which tend to have only one or two. Each layer of the network is responsible for detecting one characteristic of the inputs, and computations at each level build upon the assumptions of previous levels, which allows the network to “learn” more nuanced and abstract characteristics to determine the output.
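
To make the notion of stacked layers concrete, here is a minimal sketch of a small deep network built with the Keras Sequential API in Python (assuming TensorFlow is installed). The layer sizes, depth, and synthetic data are arbitrary placeholders for illustration, not a recommended architecture.

    # A minimal sketch of a "deep" network: each Dense layer learns a
    # progressively more abstract representation of the raw inputs.
    # Layer sizes, depth, and the 20-feature input are arbitrary placeholders.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Input(shape=(20,)),             # raw input features
        layers.Dense(64, activation="relu"),   # low-level patterns
        layers.Dense(32, activation="relu"),   # combinations of patterns
        layers.Dense(16, activation="relu"),   # more abstract characteristics
        layers.Dense(1, activation="sigmoid"), # final prediction
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Synthetic stand-in data; a real deployment would use a large labeled data set.
    X = np.random.rand(1000, 20)
    y = (X.sum(axis=1) > 10).astype(int)
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

Each Dense layer transforms the output of the one before it, which is the mechanical sense in which deeper layers come to represent progressively more abstract characteristics of the input.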

While DL is loosely modeled on the biological brain, in which any neuron can connect to any other neuron within a certain physical distance, artificial neural networks have finite layers, connections, and directions of data propagation. When exposed to massive amounts of data, deep learning systems can develop basic pattern recognition, enabling algorithms to train themselves to perform tasks and adapt to new data. This ability to draw patterns and clusters from within the data means DL can infer outcomes without explicit instructions. That said, not all deep learning is unsupervised; many companies supervise DL with labeled data.

There are numerous types of artificial neural networks in use (and more developed every day) and a growing number of deep learning frameworks available open-source. For a deeper dive into these categorizations, check out the Neural Network Zoo by the Asimov Institute.

While deep learning has been around for decades, its recent resurgence (and commercial viability) is the result of three emerging forces: colossal data, significant improvements in hardware speed, and better algorithms. These drivers are why deep learning has emerged as one of the most promising, if perhaps controversial, analytics techniques in the world of technology.

Use Cases & Business Models

Part of the intrigue, and the difficulty, of understanding deep learning is that it is application-agnostic; it is an enabling technique for any kind of data analysis. Deep learning can be applied to, and is particularly well suited to autonomously extracting nuance from, any data set (structured or unstructured) large enough to reveal statistically significant patterns. It is also not typically used alone, but rather in tandem with other data analysis techniques such as natural language processing, computer vision, and machine learning. What follows are sample use cases, with an emphasis on how deep learning contributes:

Panasonic recently announced a deep learning-based facial recognition system that matches faces against records in order to streamline immigration procedures.
  • Image and object recognition: Analyzing very large data sets to automate the recognition, classification, and context associated with an image or object (a minimal code sketch follows this list)
    • Sample applications: Photo tagging; facial recognition; video analysis; simultaneous localization and mapping; obstacle avoidance; satellite imagery analysis
  • Text and speech analysis: Analyzing large (often unstructured) data sets to recognize, process, and tag text, speech, voice, and language; also used to serve recommendations based on tagging
    • Sample applications: Voice recognition; language translation; event scheduling; search engine queries; social media analysis and curation
  • Risk assessment: Analyzing very large and heterogeneous data sets to identify patterns associated with malicious, criminal, non-compliant, nefarious or otherwise threatening activities; also used to predict recommended courses of action and in some cases trigger specific actions
    • Sample applications: Identity fraud; transaction fraud; malware detection; cyber threat analysis; anti-counterfeit; procurement fraud; insurance risk analysis
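
As an illustration of the image recognition use case, the sketch below loads a pre-trained network (ResNet50 trained on ImageNet) and classifies a local image. TensorFlow/Keras is assumed to be installed, and “photo.jpg” is a hypothetical file path; in practice, many businesses start with exactly this kind of pre-trained model rather than training one from scratch.

    # Hedged sketch: image recognition with a pre-trained network (ResNet50).
    # Assumes TensorFlow/Keras is installed and "photo.jpg" is a local image file.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")   # downloads ImageNet weights on first run

    img = image.load_img("photo.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

    preds = model.predict(x)
    for _, label, score in decode_predictions(preds, top=3)[0]:
        print(f"{label}: {score:.2%}")     # e.g. "tabby: 54.12%"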

 

At Facebook’s 2018 F8 conference, the company discussed how it is using AI to review and vet content to ensure bad actors are rooted out.
  • Content curation: Analyzing large, heterogeneous data sets to categorize, process, triage, personalize, and serve specific content for specific contexts
    • Sample applications: Social media feed curation; image placement in advertising; customer support or chatbot interactions

Retailer Stitch Fix uses deep learning to analyze styles and design new clothes.

  • Process optimization & product management: Analyzing large data sets to identify anomalies, cluster patterns, predict outcomes or ways to optimize, and automate specific workflows
    • Sample applications: Predictive maintenance; product design; predictive retail; scenario planning; robotic process automation; weather forecasting; fleet routing

Most DL applications today are used to drive efficiencies, cost savings, and enhancements to existing business models, rather than to generate net new revenues. But many expect this to change. While the applications above introduce incremental, if very practical, advancements, they are the tip of the proverbial iceberg. DL-driven innovations today are laying the foundation for more disruptive applications of tomorrow, such as driverless cars, autonomous machines, personalized education, precision medicine, and many other capabilities previously reserved for science fiction.

In future scenarios, many expect DL will be applied to software development itself, wherein DL systems iteratively code and design products and services themselves, perhaps beyond the realm of current human imagination or engineering capabilities.

Exciting implications aside, one of the most important things businesses must remember about deep learning (and all of AI) is that successful applications are extremely narrow. We may dream big, but artificial neural networks are NOT biological neural networks, and machines are trained, and perform inference, based only on the data we feed them, not on the world around them.

Challenges & Risk Mitigation

Deep learning and AI in general introduce a wide range of challenges and risks to businesses. The following underscore the importance of education, training, governance, and generally conservative application, especially in these early days.

  • Confused business leaders struggle to understand deep learning, or even to fully define AI in general. The technology is mathematically complex, application-agnostic, may or may not require infrastructure investments, and moving from education to ideation to action is easier said than done. Concerned and resistant employees also need support.
  • Disingenuous vendors are infamous for overselling their solutions. In the case of deep learning, some are pitching DL as the magic bullet for cleaning up (even finding hidden patterns and monetizing!) big data warehouse investments. While DL is useful for mining unstructured data sets, companies should be wary of the many ways AI and DL are oversold!
  • Talent shortage. The vast majority of businesses lack dedicated data science resources, if they have any at all. Not only that, the limited pipeline of people with these skillsets is being actively recruited (at starting salaries between $150,000 and $400,000) by virtually all of the largest technology firms.
  • Data swamps. An estimated 80-90% of enterprise data is never used; instead, data sits scattered, unstructured, and untapped across data marts, lakes, warehouses, etc. Given that good, clean, and big data is the prerequisite for good AI, this is a significant and resource-intensive hurdle to overcome before effective deployment.
  • Explainability. The ability to see “inside” artificial neural networks, both to understand why an outcome was produced and to determine which parameters, layers, and nodes carried the greatest weight in the decision-making, remains limited. This is problematic from the enterprise perspective in terms of accountability, auditability, regulatory compliance, anti-discrimination, consumer protections, and errors in the model.
  • Ethical issues. Issues with explainability only exacerbate the myriad ethical questions facing AI: algorithmic bias and encoded judgement, inadvertent discrimination and disenfranchisement, AI-based content forgery and fake news, surveillance and privacy violations, wealth distribution, job displacement, autonomous machines, and beyond.
  • Bots as brand extensions. As businesses offload more and more customer-facing workflows to bots and AI agents, those agents become new touchpoints for brand engagement and relationship-building; when they go awry, they risk botched experiences, public relations (PR) crises, misrepresentation, or backlash.
  • New liabilities can also surface when AI and DL are used to collect and mine sensitive information. For instance, if a user shares an intention to kill themselves or others via a voice agent or chatbot, does the brand have an obligation to report the information? To whom? Who decides?

Perhaps the most immediate challenge and risk is overly inflated expectations around what AI and DL can and cannot do. It is rarely stated, but well advised: set the bar low, and even evangelize the limitations of AI.

Organizational Structure and Leadership

Our research finds deep learning initiatives tend to come out of IT programs or data analysis teams working within specific business functions. Initiatives can also emerge from horizontal teams such as cross-functional Digital Centers of Excellence (COEs) or dedicated Innovation programs designed to activate data and emerging technologies to enhance business objectives.

While structures supporting AI programs vary depending on the size and complexity of the organization, AI (including deep learning) presents a range of structural considerations:

  • Spokes, business functions, or vertical lines of business are essential test beds for DL. These groups have the greatest context for where AI techniques could be applied for maximum immediate impact. The depth of domain expertise in specific vertical programs is essential for both data preparedness and anticipating impacts on people (e.g. employees or end customers) and products or workflows.
  • Hubs, or centralized centers for corporate organization and planning, are also important for strategy, for assigning meaningful intention to efforts and metrics, and for governance.
  • Cross-functional programs, unlike hubs or spokes, are groups whose core role is to connect the two, specifically to drive innovations more efficiently, reduce technology and vendor redundancy, and increase collaboration, interdepartmental workflows, etc. As AI programs evolve (from very narrow efforts in single spokes to more enterprise-wide learning and optimization), these liaisons are critical.

Regardless of who is driving AI within the organization, the first focal point of these programs is simply getting off the ground, which in practical terms means aggregating data into data lakes, then sorting, tagging, and distributing cleaned data for training models. Unsurprisingly, this was a big focus at IBM’s recent Chief Data Officer event in San Francisco.

Governance, Process, Compliance

The nature of AI is such that governing its application is not the job of one or a few, but must be an ongoing and coordinated effort across many. Organizational structures supporting AI and DL (outlined above) should make ongoing efforts to expand platforms so that more skillsets can contribute to the optimization of the programs. Key roles include:

  • Executives. In the context of AI, executive championship is important for building buy-in across teams, identifying areas of risk (cultural, labor), and allocating proper resources and investment accordingly
  • Data scientists & engineers. While data science and analysis is an essential skillset for DL, data scientists can’t exist in a vacuum; beyond working in tandem with engineers and developers, data scientists are critical for training, monitoring, and optimizing deep learning models and inferencing. They should also be trained to ask the critical business questions, as well as design, legal, and ethical questions.
  • Product leaders/managers are often the interpreters between technical and business developers; in the case of AI, they are often the linchpins for translating technical capabilities to business needs and vice versa.
  • Front line employees are, more often than not, critical enablers for any kind of large-scale deployment. Those handling the everyday affairs of current workflows are also the ones with the deepest familiarity of current pain points, customer sensitivities, tedium, and needs.
  • Subject matter experts of all varieties are also important for DL optimization. In some cases, these could be front line employees; in others, those best suited to translate highly specialized domain expertise into DL-based decision-making (e.g. doctors, lawyers, accountants, scientists, etc.)
  • Designers are also essential across the entire deployment lifecycle, from designing use case workflows to making sure platforms, dashboards, and analytics portals are user-friendly.

The gaping lack of data science talent in most companies means that training, upskilling, building easy-to-use interfaces and tools, and fostering collaboration across teams become essential. In the evolution from super-narrow applications supporting single business processes to broader enterprise optimization made possible by aggregating data and learnings, use cases in one area can catalyze applications in other areas. ‘Democratizing’ access and ‘diversifying’ who can contribute to these efforts are critical processes to define.

While a requirement in regulated industries, governance practices such as keeping copies of data; logging events associated with training, versioning, and modifications; reporting; and documenting how to improve render more value than merely avoiding penalties.
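
As a rough sketch of what such logging can look like in practice (not a full governance framework), the snippet below fingerprints the training data and appends a small audit record for each run; the file names and fields are hypothetical.

    # A minimal sketch of logging a training run: record what data, model version,
    # and parameters produced a given model so the run can be audited or reproduced.
    # File paths and record fields are hypothetical placeholders.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of(path: str) -> str:
        """Fingerprint a training data file so later audits can confirm what was used."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def log_training_run(data_path: str, model_version: str, params: dict,
                         log_path: str = "training_runs.jsonl") -> None:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "data_file": data_path,
            "data_sha256": sha256_of(data_path),
            "model_version": model_version,
            "hyperparameters": params,
        }
        # Append one JSON record per run: an audit trail engineers (or regulators) can replay.
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example usage with placeholder values:
    # log_training_run("claims_2018.csv", "fraud-model-v0.3", {"epochs": 20, "lr": 1e-3})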

Change Management

Machine learning is a nascent technology; deep learning even more so. As companies begin to deploy these techniques across the organization, it can be easy to overlook the importance of change management. Furthermore, leaders don’t just have to think about change management in terms of people and organizational culture, but also in terms of the technology and models themselves.

From a people and organizational standpoint, AI presents unique barriers (what we sometimes call ‘AI’s cultural stigmata’) such as fears of job displacement and killer robots, as well as well-justified fears and doubts associated with algorithmic bias, surveillance, and the limitations of machines in performing cognitive tasks. Businesses must address these head-on, even emphasizing the limitations of the technology and, most importantly, presenting clear intentions around the objectives, processes, and short- and longer-term strategies for AI-based applications.

From a more technical standpoint, the systems, compute, techniques, and models themselves will require change management. No AI will ever be a ‘set and forget’ technology, not only because such systems are designed to learn, optimize, and predict, but because the tools and capabilities within the AI space are rapidly advancing. New frameworks and libraries supporting deep learning applications, new datasets, new APIs, new algorithms, and new tooling for data lakes and cloud-based, edge, and chip-level compute are emerging all the time. Google’s recent developments around AutoML (machine learning models designed to automate the machine learning design process itself) are but one small, yet potentially significant, example. As each introduces new possibilities for AI performance, companies must monitor these developments closely.

Data Lifecycle

The axiom ‘garbage in, garbage out’ has never been more true, or potentially dangerous, than in the age of AI. The difficulty of explaining and interpreting artificial neural networks underscores the importance of monitoring the cleanliness, integrity, and appropriateness of the data flowing in across the lifecycle.

This framework was conceived by data science executive and advisor Monica Rogati.

Think of data preparation for data science initiatives as a sort of ‘hierarchy of [data science] needs,’ in which data isn’t just collected and fed into models, but must undergo extensive and ongoing processing to convert it from raw input into reliable, cleaned, labeled, and feature-relevant ingredients that deliver the optimum output.

For most businesses, this requires extensive mining and analysis of data lakes to both identify relevant data sources and prepare data for training and feature extraction—preliminary steps to deploying artificial neural networks for inferencing.
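
As a rough illustration of this preparation step, the pandas sketch below deduplicates, cleans, and labels raw records before they reach a model; the file name, columns, and label rule are hypothetical placeholders, not a prescribed pipeline.

    # A minimal data-preparation sketch: raw records are deduplicated, cleaned,
    # and labeled before they ever reach a deep learning model.
    # "transactions.csv" and its columns are hypothetical placeholders.
    import pandas as pd

    raw = pd.read_csv("transactions.csv")

    prepared = (
        raw
        .drop_duplicates()                                   # remove repeated records
        .dropna(subset=["amount", "merchant", "timestamp"])  # drop rows missing key fields
        .assign(
            amount=lambda df: df["amount"].clip(lower=0),    # guard against bad negative values
            timestamp=lambda df: pd.to_datetime(df["timestamp"], errors="coerce"),
        )
        .dropna(subset=["timestamp"])                        # drop rows whose dates failed to parse
    )

    # A simple engineered feature and a placeholder label for supervised training.
    prepared["hour_of_day"] = prepared["timestamp"].dt.hour
    prepared["is_high_value"] = (prepared["amount"] > prepared["amount"].quantile(0.95)).astype(int)

    prepared.to_csv("transactions_clean.csv", index=False)   # hand off to the training pipeline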

Data preparedness in DL is imperative not only in the early data preparation phase of AI development, but on an ongoing basis. Consider, for example, the need for ongoing management, monitoring, re-testing, and data governance when:

  • AI models continually learn based on dynamic environments
  • AI models use new data inputs (new sources, data types, batch processing, etc.)
  • AI models (or their inputs or outputs) may be held to different regulatory compliance regimes
  • AI models are potentially impacted by hacks, malicious actors, spoofed data, etc.
  • AI models are measured differently or reflect different objectives (e.g. new product discovery vs. converted sales)

Measurement

Measuring AI initiatives should be approached more dynamically than measuring traditional technology deployments. Adding learning or automation to a process is different from adding a widget to a process. Because AI constantly learns, ‘lift’ can be a moving goalpost, and optimization in one area (e.g. risk reduction) can impact another (e.g. shifting from reactive risk analysis to proactive risk mitigation).

Although small prototypes are useful for demonstrating [to decision-makers] ‘the art of the possible’ in concrete monetary impacts (e.g. cost reductions, lifts in conversion, pipeline optimization), companies we’ve interviewed on the topic reinforce the need to begin by measuring more than ROI.

The first area to consider for metrics is the accuracy of the model and its outputs. Is the image recognized actually a cat? Is the conversational agent forming grammatically accurate questions and responses? There is also debate in the community around if, how, and to what extent to measure replicability, interpretability, and other ways to test the ‘integrity’ of the model. These are critical areas to consider, but they also rely on first understanding (and measuring) the outcomes themselves to determine the impacts of the model.
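
As a simple illustration of output-accuracy metrics, the sketch below uses scikit-learn on placeholder labels standing in for a real model’s predictions; it is a sketch of the measurement step, not of any particular model.

    # A minimal sketch of output-accuracy measurement using scikit-learn.
    # y_true and y_pred are placeholder labels standing in for a real model's predictions.
    from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (e.g. 1 = "cat", 0 = "not cat")
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # what the model predicted

    print("accuracy: ", accuracy_score(y_true, y_pred))    # share of correct predictions
    print("precision:", precision_score(y_true, y_pred))   # of predicted cats, how many were cats
    print("recall:   ", recall_score(y_true, y_pred))      # of actual cats, how many were found
    print(confusion_matrix(y_true, y_pred))                # full breakdown of hits and misses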

The most important area to measure is impact on people. Start with metrics that assess value to the user or function, often in the form of productivity or sentiment. For example:

  • Time saved (for agents, customers, analysts, etc.)
  • # successful outputs (e.g. cases resolved, conversions, transactions, referrals)
  • Identification of fake, garbage, or fraudulent inputs (e.g. referrals, threats)
  • Improved sentiment (e.g. shopping, sales, support, invoicing process)

Businesses attempting to measure outcomes derived by deep learning should be prepared to shift metrics as programs evolve. This is important, not only because models, inputs, and outputs will evolve, but because deep learning may be applied as one technique supporting one initiative. It may be more strategic to measure broader impacts of DL on the organization (that is, taken in sum across business processes), not just vertically.

Although DL has become a pervasive analytics technique in the largest technology companies, most organizations remain in the very early days. Anecdotally, we estimate just one in five enterprises is using DL in any of their business processes today. Kaleido Insights advises the following:

  1. Invest in understanding the capabilities and limitations of deep learning, as well as how it fits in with other data analysis techniques, hardware and device capabilities.
  2. Take baby steps to discover [capabilities and limitations] first-hand: conduct experiments and pilot techniques, invest in workforce education and upskilling.
  3. Expect significant and rapid advancements in software, compute, storage, and networking capabilities, and dedicate resources to tracking and testing these.

 
