More than ever, organizations are putting their confidence – and investment – into the potential of artificial intelligence (AI) and machine learning (ML).
According to the 2022 IBM Global AI Adoption Index, 35% of companies report using AI today in their business, while an additional 42% say they are exploring AI. Meanwhile, a McKinsey survey found that 56% of respondents reported they had adopted AI in at least one function in 2021, up from 50% in 2020.
But can investments in AI deliver true ROI that directly impacts a company’s bottom line?
According to Domino Data Lab’s recent REVelate survey, which surveyed attendees at New York City’s Rev3 conference in May, many respondents seem to think so. Nearly half, in fact, expect double-digit growth as a result of data science. And 4 in 5 respondents (79%) said that data science, ML and AI are critical to the overall future growth of their company, with 36% calling it the single most critical factor.
Implementing AI, of course, is no easy task. Other survey data shows another side of the confidence coin. For example, recent survey data by AI engineering firm CognitiveScale finds that, although execs know that data quality and deployment are critical success factors for successful app development to drive digital transformation, more than 76% aren’t sure how to get there in their target 12–18 month window. In addition, 32% of execs say that it has taken longer than expected to get an AI system into production.
AI must be accountable
ROI from AI is possible, but it must be clearly defined and tied to a business goal, Bob Picciano, CEO of CognitiveScale, told VentureBeat.
“If the business goal is to get more long-range prediction and increased prediction accuracy with historical data, that’s where AI can come into play,” he said. “But AI has to be accountable to drive business effectiveness – it’s not sufficient to say an ML model was 98% accurate.”
Instead, the ROI could be, for example, that in order to improve call center effectiveness, AI-driven capabilities ensure that the average call handling time is reduced.
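Picciano’s call-center example lends itself to a back-of-envelope calculation. The sketch below (the function name and every figure are hypothetical, chosen purely for illustration) converts seconds shaved off the average handle time into annual savings and an ROI ratio:

```python
# Back-of-envelope ROI for an AI-assisted call center.
# All figures used here are hypothetical, for illustration only.

def call_center_roi(calls_per_year, seconds_saved_per_call,
                    cost_per_agent_hour, ai_annual_cost):
    """Return (annual_savings, roi_ratio) from reduced average handle time."""
    hours_saved = calls_per_year * seconds_saved_per_call / 3600
    annual_savings = hours_saved * cost_per_agent_hour
    roi = (annual_savings - ai_annual_cost) / ai_annual_cost
    return annual_savings, roi

# Hypothetical scenario: 2M calls/year, 30 seconds saved per call,
# $30 per agent-hour, $250k/year for the AI capability.
savings, roi = call_center_roi(2_000_000, 30, 30.0, 250_000)
print(f"savings=${savings:,.0f}, ROI={roi:.0%}")  # → savings=$500,000, ROI=100%
```

The point is that the C-suite conversation happens in these terms – hours and dollars saved – not in model-accuracy terms.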
“That kind of ROI is what they talk about in the C-suite,” he explained. “They don’t talk about whether the model is accurate or robust or drifting.”
Shay Sabhikhi, cofounder and COO at CognitiveScale, added that he’s not surprised that 76% of respondents reported having trouble scaling their AI efforts. “That’s exactly what we’re hearing from our enterprise clients,” he said. One problem, he explained, is friction between data science teams and the rest of the organization, which doesn’t know what to do with the models those teams develop.
“Those models may have the best algorithms and precision-recall, but they sit on the shelf because they literally get thrown over to the development team, which then has to scramble to assemble the application,” he said.
At this point, however, organizations have to be accountable for their investments in AI because AI is no longer a series of science experiments, Picciano pointed out. “We call it going from the lab to life,” he said. “I was at a chief data analytics officer conference and they all said, how do I scale? How do I industrialize AI?”
Is ROI the right metric for AI?
However, not everyone agrees that ROI is even the best way to measure whether AI drives value in the organization. According to Nicola Morini Bianzino, global chief technology officer at EY, thinking of artificial intelligence in the enterprise in terms of “use cases” that are then measured through ROI is the wrong way to approach AI.
“To me, AI is a set of techniques that will be deployed pretty much everywhere across the enterprise – there is not going to be an isolation of a use case with the associated ROI analysis,” he said.
Instead, he explained, organizations simply have to use AI – everywhere. “It’s almost like the cloud, where two or three years ago I had a lot of conversations with clients who asked, ‘What is the ROI? What’s the business case for me to move to the cloud?’ Now, post-pandemic, that conversation doesn’t happen anymore. Everybody just says, ‘I’ve got to do it.’”
Also, Bianzino pointed out, discussing AI and ROI depends on what you mean by “using AI.”
“Let’s say you are trying to apply some self-driving capabilities – that is, computer vision as a branch of AI,” he said. “Is that a business case? No, because you cannot implement self-driving without AI.” The same is true for a company like EY, which ingests massive amounts of data and provides advice to clients – which can’t be done without AI. “It’s something that you cannot isolate away from the process – it’s built into it,” he said.
In addition, AI, by definition, is not productive or efficient on day one. It takes time to get the data, train the models, evolve the models and scale up the models. “It’s not like one day you can say, I’m done with the AI and 100% of the value is right there – no, this is an ongoing capability that gets better in time,” he said. “There is not really an end in terms of value that can be generated.”
In a way, Bianzino said, AI is becoming part of the cost of doing business. “If you are in a business that involves data analysis, you cannot not have AI capabilities,” he explained. “Can you isolate the business case of these models? It is very difficult and I don’t think it’s necessary. To me, it’s almost like it’s a cost of the infrastructure to run your business.”
ROI of AI is hard to measure
Kjell Carlsson, head of data science strategy and evangelism at enterprise MLOps provider Domino Data Lab, says that at the end of the day, what organizations want is a measure of the business impact of AI – how much it contributed to the bottom line. But one problem is that this impact can be quite disconnected from how much work went into developing the model.
“So if you create a model which improves click-through conversion by a percentage point, you’ve just added several million dollars to the bottom line of the organization,” he said. “But you could also have created a good predictive maintenance model which helped give advance warning to a piece of machinery needing maintenance before it happens.” In that case, the dollar-value impact to the organization could be entirely different, “even though one of them might end up being a much harder problem,” he added.
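Carlsson’s click-through example can be made concrete with a rough sketch. The traffic volume, lift, and per-conversion margin below are all assumed numbers, not figures from the article, and the function is hypothetical:

```python
# Illustration of the point above: a one-percentage-point lift in
# click-through conversion, applied to large traffic, can be worth
# millions regardless of how hard the model was to build.
# All inputs are assumptions, for illustration only.

def conversion_lift_revenue(annual_visitors, lift_points, revenue_per_conversion):
    """Extra annual revenue from raising the conversion rate by
    `lift_points` percentage points."""
    extra_conversions = annual_visitors * (lift_points / 100)
    return extra_conversions * revenue_per_conversion

# 50M visitors/year, +1 percentage point, $5 margin per conversion.
extra = conversion_lift_revenue(50_000_000, 1.0, 5.0)
print(f"${extra:,.0f}")  # → $2,500,000
```

Under these assumed numbers, a one-point lift works out to $2.5 million a year – the dollar impact is set by the traffic and the margin, not by the difficulty of the modeling problem.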
Overall, organizations do need a “balanced scorecard” where they are tracking AI production. “Because if you’re not getting anything into production, then that’s probably a sign that you’ve got an issue,” he said. “On the other hand, if you are getting too much into production, that can also be a sign that there’s an issue.”
For example, the more models data science teams deploy, the more models they’re on the hook for managing and maintaining, he explained. “[If] you deployed this many models in the last year, you can’t actually undertake these other high-value ones that are coming your way,” he said.
But another issue in measuring the ROI of AI is that for a lot of data science projects, the outcome isn’t a model that goes into production. “If you want to do a quantitative win-loss analysis of deals in the last year, you might want to do a rigorous statistical investigation of that,” he said. “But there’s no model that would go into production, you’re using the AI for the insights you get along the way.”
Data science activities must be tracked
Still, organizations can’t measure the role of AI if data science activities aren’t tracked. “One of the problems right now is that so few data science activities are really being collected and analyzed,” said Carlsson. “If you ask folks, they say they don’t really know how the model is performing, or how many projects they have, or how many code commits their data scientists have made within the last week.”
One reason for that is the very disconnected set of tools data scientists are required to use. “This is one of the reasons why Git has become all the more popular as a repository, a single source of truth for your data scientists in an organization,” he explained. MLOps providers such as Domino Data Lab offer platforms that support these different tools. “The degree to which organizations can create these more centralized platforms … is important,” he said.
AI outcomes are top of mind
Wallaroo CEO and founder Vid Jain spent close to a decade in the high-frequency trading business at Merrill Lynch, where his role, he said, was to deploy ML at scale and do so with a positive ROI.
The challenge was not actually developing the data science, cleansing the data or building the trade repositories, now called data lakes. By far, the biggest challenge was taking those models, operationalizing them and delivering the business value, he said.
“Delivering the ROI turns out to be very hard – 90% of these AI initiatives don’t generate their ROI, or they don’t generate enough ROI to be worth the investment,” he said. “But this is top of mind for everybody. And the answer is not one thing.”
A fundamental issue is that many assume that operationalizing ML is not much different than operationalizing a standard kind of application, he explained, adding that there is a big difference, because AI is not static.
“It’s almost like tending a farm, because the data is living, the data changes and you’re not done,” he said. “It’s not like you build a recommendation algorithm and then people’s behavior of how they buy is frozen in time. People change how they buy. All of a sudden, your competitor has a promotion. They stop buying from you. They go to the competitor. You have to constantly tend to it.”
Ultimately, every organization needs to decide how they will align their culture to the end goal around implementing AI. “Then you really have to empower the people to drive this transformation, and then make the people that are critical to your existing lines of business feel like they’re going to get some value out of the AI,” he said.
Most companies are still early in that journey, he added. “I don’t think most companies are there yet, but I’ve certainly seen over the last six to nine months that there’s been a shift towards getting serious about the business outcome and the business value.”
ROI of AI remains elusive
But the question of how to measure the ROI of AI remains elusive for many organizations. “For some there are some basic things, like they can’t even get their models into production, or they can but they’re flying blind, or they are successful but now they want to scale,” Jain said. “But as far as the ROI, there is often no P&L associated with machine learning.”
Often, AI initiatives are part of a Center of Excellence and the ROI is grabbed by the business units, he explained, while in other cases it’s simply difficult to measure.
“The problem is, is the AI part of the business? Or is it a utility? If you’re a digital native, AI might be part of the fuel the business runs on,” he said. “But in a large organization that has legacy businesses or is pivoting, how to measure ROI is a fundamental question they have to wrestle with.”