How operationalizing AI strategy moves you from strategy to action
Business Intelligence, Innovative Ideas, Productivity
Partner, Ernst & Young Australia and EY Asia-Pacific Artificial Intelligence and Analytics Advisory Leader
Posted on September 7, 2018
Organizations ready to embrace AI’s potential will want to invest as much thought into operationalizing their AI strategy as in the technologies that power it.
After all, there is no point in building great artificial intelligence (AI) solutions if no one uses them. Let’s face it: AI poses unique challenges that would test any organization’s capacity for change and new thinking. In fact, some estimates suggest that, at most, one in five AI solutions becomes operational. At an average cost of US$200,000 to develop an AI solution, that could represent $800,000 wasted for every $1m invested. That is a magnitude of throw-away work few companies can afford.
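The arithmetic above can be sketched as a back-of-the-envelope calculation. The one-in-five success rate and the $1m investment are the article’s figures; the function name and defaults are illustrative:

```python
def wasted_spend(success_rate: float = 0.2, invested: float = 1_000_000) -> float:
    """Estimate spend on AI solutions that never become operational."""
    return invested * (1 - success_rate)

print(wasted_spend())  # 800000.0 wasted per $1m invested
```

Even a modest improvement in the operationalization rate moves this number substantially, which is the economic case for the practices discussed below.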
So while the business case for an intelligent approach to AI strategy may be clear, the path to successfully operationalizing AI and minimizing the associated risks is often less evident.
Typically, there are three main obstacles that impede success in this sphere:
- The AI solution is not “smart” enough to perform adequately and accurately in real-life, real-time conditions;
- Operational constraints obstruct AI deployment;
- People within the organization do not embrace, or worse, actively oppose, the AI program for reasons other than performance or fit.
Identifying the potential barriers that can hinder AI deployment, and developing appropriate strategies to overcome them, is an essential component of a successful AI strategy.
How good is “good enough”?
The investment case for integrated AI strategy is based on one foundational assumption: that AI can do things better, cheaper or faster than people. But establishing this as a fact is often easier said than done.
The level of intelligence of an AI solution is difficult to predict until it is well into its development cycle. Unlike other technology projects where solutions perform similarly in different contexts, the performance of the same AI solution will vary depending on the quality of the data that is fed into it, and the training that shapes its actions and behavior. Robust planning that, among other things, coherently defines which aspects of the business can most benefit from AI-related investments, is part of structuring the AI initiative for success.
In some applications, the AI solution may need to perform at the human level or better to avoid creating more work; otherwise, the effort of investigating and fixing errors can quickly erode the efficiency savings from automation. For example, using AI to detect fraud improves the rate of detection of fraudulent transactions, but it typically increases the rate of false positives as well. Past a certain point, the investigations team may be flooded with so many cases that the AI “advantage” is erased.
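This trade-off can be illustrated with a rough sketch (all figures here are hypothetical, not from the article): raising the detection rate usually raises the false-positive rate too, and the flagged caseload can quickly outgrow the investigation team’s capacity.

```python
def investigation_load(transactions: int, fraud_rate: float,
                       recall: float, false_positive_rate: float) -> float:
    """Cases flagged for review = true detections + false alarms."""
    fraud = transactions * fraud_rate
    legitimate = transactions - fraud
    return fraud * recall + legitimate * false_positive_rate

# A more "sensitive" model catches more fraud but floods investigators:
modest = investigation_load(100_000, 0.001, recall=0.70, false_positive_rate=0.001)
aggressive = investigation_load(100_000, 0.001, recall=0.95, false_positive_rate=0.02)
print(round(modest), round(aggressive))  # roughly 170 vs 2093 flagged cases
```

Here a 25-point gain in detection comes with more than a tenfold increase in cases to review, which is exactly the point where the AI “advantage” can be erased.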
If one cannot predict whether an AI solution will satisfy performance expectations, then early validation becomes essential. A good approach is to start with an estimate of the level of performance that can serve as a target or benchmark. The growing availability of applied AI research may help in this regard. For example, Baidu Research suggests a ceiling of ~80% accuracy in image recognition algorithms; that is, one can expect a “good” solution to classify images accurately the first time in four out of five cases.1 Consulting partners may also be a good source of performance benchmarks drawn from similar organizations.
Another tactic is to diversify AI opportunities. For example, in a recent engagement, EY helped an organization to segment more than 40 potential use cases into high, medium and low risk categories, assigning a funding allocation of 20:60:20. The high-risk segment allowed the organization to attract and retain top talent by providing challenging problems to work on, while the low-risk segment provided a set of “safe bets.”
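The 20:60:20 split described above can be expressed as a simple portfolio allocation. The risk categories and ratios are the article’s; the $1m budget and the function are illustrative:

```python
def allocate(budget: int, split_pct: dict[str, int]) -> dict[str, float]:
    """Divide an AI portfolio budget across risk categories by percentage."""
    assert sum(split_pct.values()) == 100, "percentages must sum to 100"
    return {risk: budget * pct / 100 for risk, pct in split_pct.items()}

portfolio = allocate(1_000_000, {"high": 20, "medium": 60, "low": 20})
print(portfolio)  # {'high': 200000.0, 'medium': 600000.0, 'low': 200000.0}
```

The design choice is deliberate: most funding goes to medium-risk use cases, with a bounded slice reserved for ambitious bets and another for near-certain wins.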
A third approach is to think of the AI program as a set of option contracts.2 For example, most AI programs involve an investment in improving data quality. By defining and realizing the benefits of these improvements by themselves, the organization protects the downside, and is in a better position to decide whether to “call” the options (i.e., continue developing AI solutions) once the performance has been validated.
From lab to field: proof-of-concept validation helps prevent costly errors
Sadly, more than a few organizations discover, the hard way, that their AI solutions do not actually fit into their operational environments. While one might expect these investments to be driven by the desire to address specific operational goals in the first place, surprisingly often AI projects do not start this way.
“Committing too much, too soon, to ideas that have not yet been validated is at the root of many AI failures. A rigorous proof-of-concept process can reveal gaps in the operational assumptions that make or break these initiatives.”
The fact is, shiny new IT projects are sometimes driven by the “art of the possible” (let’s do it because we can!) rather than a strategy towards achieving a defined goal and purpose (let’s do it because we should!). Committing too much, too soon, to ideas that have not yet been validated does no one any favors and is at the root of many of today’s AI failures.
A Proof-of-Concept (PoC) can help validate the project’s key operational assumptions without adding excessive expense or delay. The PoC should be representative of the operational context of the AI solution; be mindful that it does not become a Proof-of-Technology (PoT), which evaluates technical, but not operational, assumptions.
For example, a resources company successfully tested more than 500 algorithms to predict equipment failures days or weeks before they happened. But six months into the program, just as they were set to operationalize the algorithms, they hit an unexpected problem: their maintenance managers were required to adhere to schedules locked in six months in advance. In effect, a PoT had validated the technology, but no PoC had tested the operational context in which it would operate. It was a costly oversight.
Tackling common barriers to AI adoption
Performance and fit aside, AI solutions can encounter resistance across the organization during implementation. People may not understand them because the technology is so new. AI solutions require people in the business to invest time to train them adequately. And AI solutions often involve organizational redesign. These strategies can help overcome common barriers to AI adoption:
- Align the AI program to the organization’s purpose; AI strategy is not necessarily a zero-sum game for employees. In fact, when its connection to purpose is clearly articulated, many employees are positive about the potential of new technology to empower them to work in smarter ways.3
- Show before you tell; organizations can cultivate understanding and clarity around AI strategies and solutions through demonstration. For example, consulting partners can showcase previous solutions that have been successful, while collaboration and information sharing can help organizations share and learn from experiences.
- Aim to reduce the training effort; resist overtraining. Some AI solutions accommodate “batch” training that uses pre-existing data, while pre-trained algorithms may be an efficient route for others.
Can embracing uncertainty help unlock success?
Among the organizations that have already entered the AI race, one factor appears to differentiate the winners from the losers: the ability to embed new ways of working under uncertainty. Three key steps help cultivate this ability:
- Be realistic about the performance ceiling of AI technology
- Test the key operational assumptions before fully committing to opportunities
- Pivot from a supply-driven to a demand-driven discussion as early in the process as possible
As organizations work to define their AI strategy, structure their AI program and embed it into their businesses, they will want to invest time and effort in a realistic plan to operationalize the AI solutions they select. Anticipating potential barriers to deployment from the start, and benchmarking realistic outcomes, will help these organizations reap the benefits of an intelligent approach to AI.