More than three-quarters of companies say they have AI models that never come into use. For 20% of companies, the picture is even worse: only 10% of their models make it into production.
That’s according to a new survey commissioned by Run:AI, which found that infrastructure challenges are causing resources to sit idle at companies investing in AI. “[I]f most AI models never make it into production, the promise of AI is not being realized,” Run:AI CEO Omri Geller said in a statement. “Our survey revealed that … data scientists are requesting manual access to GPUs, and the journey to the cloud is ongoing.”
The research, conducted by Global Surveys, canvassed more than 200 scientists, AI and IT practitioners, and system architects at companies with over 5,000 employees. Just 17% of respondents said they were able to achieve "high utilization" of their hardware resources, while 22% admitted that their infrastructure sits idle most of the time. That's despite significant investment: 38% of respondents pegged their company's annual budget for hardware, software, and cloud fees at more than $1 million, and 15% said their companies spend more than $10 million.
Many challenges stand in the way of successfully embedding AI throughout an organization. In an Alation whitepaper, a clear majority of employees (87%) cited data quality shortcomings as the reason their organizations failed to embrace the technology. Another report from MIT Technology Review Insights and Databricks found that AI’s business impact is limited by issues in managing its end-to-end lifecycle.
The end result is abysmal adoption rates. According to a 2019 IDC study, only 25% of the organizations already using AI have developed an “enterprise-wide” strategy. A recent Juniper Networks survey is less optimistic, with only 6% of respondents reporting adoption of AI-powered solutions across their business.
In its research, Run:AI identified data inconsistencies as the biggest deployment blocker: 61% of respondents said that data collection, data cleansing, and governance caused deployment problems. Forty-two percent highlighted challenges with their companies' AI infrastructure and compute capacity, and more than a third said they had to manually request access to resources in order to complete their work.
Data scientists spend the bulk of their time cleaning and organizing data, according to a 2016 survey by CrowdFlower. And respondents to Alation’s latest quarterly State of Data Culture Report said that inherent biases in the data being used in their AI systems produce discriminatory results that create compliance risks for their organizations.
The business value of any AI solution is likely to be limited without clean, centralized data pools or a strategy for actively managing them, Broadridge’s VP of innovation and growth, Neha Singh, noted in a recent piece. “McKinsey estimates that companies may be squandering as much as 70% of their data-cleansing efforts,” she wrote. “The key is prioritizing these efforts based on what’s most critical to implement the most valuable use cases.”
Despite the hurdles, Run:AI reports, companies remain committed to AI, putting millions toward infrastructure and likely millions more toward trained staff. Seventy-four percent of survey respondents said their employers were planning to increase hardware capacity or infrastructure spend in the near future.
“Companies that handle these challenges the most effectively will bring models to market and win the AI race,” Geller continued.