Artificial intelligence (AI), machine learning (ML) and other emerging technologies have the potential to solve complex problems for organizations. Yet despite increased adoption over the past two years, only a small percentage of companies feel they are gaining significant value from their AI initiatives. Where are their efforts going wrong? Simple missteps can derail any AI initiative, but there are ways to avoid them and achieve success.
Following are four mistakes that can lead to a failed AI implementation and what you should do to avoid or resolve these issues for a successful AI rollout.
Don’t solve the wrong problem
When determining where to apply AI to solve problems, look at the situation through the right lens and engage both sides of your organization in design thinking sessions, as neither business nor IT has all the answers. Business leaders know which levers can be pulled to achieve a competitive advantage, while technology leaders know how to use technology to achieve those objectives. Design thinking can help create a complete picture of the problem, requirements and desired outcome, and can prioritize which changes will have the biggest operational and financial impact.
One consumer product retail company with a 36-hour invoice processing schedule recently experienced this issue when it requested help speeding up its process. A proof of concept revealed that applying an AI/ML solution could cut processing time from 36 hours to 30 minutes, a 72-fold speedup. On paper the improvement looked great. But the company's weekly settlement process meant the improved processing time didn't matter. The solution never moved into production.
When looking at the problem to be solved, it’s important to relate it back to one of three critical bottom-line business drivers: increasing revenue, increasing profitability, or reducing risk. Saving time doesn’t necessarily translate to increased revenue or reduced cost. What business impact will the change bring?
Data quality is critical to success
Data can have a make-or-break impact on AI programs. Clean, dependable, accessible data is critical to achieving accurate results. The algorithm may be good and the model effective, but if the data is poor quality or difficult and costly to collect, there will be no clear answer. Organizations must determine what data they need, whether they can actually collect it, how difficult or expensive collection will be, and whether the data will provide the information needed.
A financial institution wanted to use AI/ML to automate loan processing, but missing data elements in source records were creating a high error rate, causing the solution to fail. A second ML model was created to review each record. Those that met the required confidence threshold were moved forward in the automated process; those that did not were pulled for human intervention to solve data-quality problems. This multistage process greatly reduced the human interaction required and enabled the institution to achieve an 85% increase in efficiency. Without the additional ML model to address data quality, the automation solution never would have enabled the organization to achieve meaningful results.
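The routing pattern the institution used can be sketched in a few lines. This is a minimal illustration, not the institution's actual system: the record fields, the 0.90 cutoff, and the completeness heuristic standing in for the second ML model are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class LoanRecord:
    applicant_id: str
    fields: dict = field(default_factory=dict)

# Assumed schema and cutoff for illustration only.
REQUIRED_FIELDS = ["income", "credit_score", "loan_amount"]
CONFIDENCE_THRESHOLD = 0.90

def completeness_confidence(record: LoanRecord) -> float:
    """Score a record by the fraction of required fields present and non-empty.
    A real deployment would use a trained ML model instead of this heuristic."""
    present = sum(1 for f in REQUIRED_FIELDS
                  if record.fields.get(f) not in (None, ""))
    return present / len(REQUIRED_FIELDS)

def route(records):
    """Split records: high-confidence ones proceed automatically,
    the rest are queued for human review of data-quality problems."""
    automated, review = [], []
    for r in records:
        if completeness_confidence(r) >= CONFIDENCE_THRESHOLD:
            automated.append(r)
        else:
            review.append(r)
    return automated, review
```

The point of the second stage is not accuracy for its own sake; it bounds how much human effort the exception queue consumes, which is where the efficiency gain comes from.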
In-house or third-party? Each has its own challenges
Each type of AI solution brings its own challenges. Solutions built in-house provide more control because you are developing the algorithm, cleaning the data, and testing and validating the model. But building your own AI solution is complicated: unless you use open-source tools you will face licensing costs, and you will bear the upfront development and ongoing maintenance costs either way.
Third-party solutions bring their own challenges, including:
- No access to the model or how it works
- Inability to know if the model is doing what it’s supposed to do
- No access to the data if the solution is SaaS based
- Inability to do regression testing or determine false-acceptance or error rates
In highly regulated industries, these issues become more challenging since regulators will be asking questions on these topics.
A financial services company was looking to validate a SaaS solution that used AI to identify suspicious activity. The company had no access to the underlying model or the data and no details on how the model determined what activity was suspicious. How could the company perform due diligence and verify the tool was effective?
In this instance, the company found its only option was to perform simulations of suspicious or nefarious activity it was trying to detect. Even this method of validation had challenges, such as ensuring the testing would not have a negative impact, create denial-of-service conditions, or impact service availability. The company decided to run simulations in a test environment to minimize risk of production impact. If companies choose to leverage this validation method, they should review service agreements to verify they have authority to conduct this type of testing and should consider the need to obtain permission from other potentially impacted third parties.
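A black-box validation run of this kind can be sketched as follows. The `submit_transaction` and `is_flagged` callables stand in for whatever API the SaaS vendor exposes, and the 95% target is a placeholder; all of these are hypothetical, not details from the company's engagement.

```python
from typing import Callable

def validate_detection(
    synthetic_cases: list[dict],
    submit_transaction: Callable[[dict], None],
    is_flagged: Callable[[dict], bool],
    target_rate: float = 0.95,  # assumed acceptance criterion
) -> tuple[float, bool]:
    """Replay known-suspicious synthetic transactions in a TEST environment
    and measure what fraction the black-box tool flags.

    Run this only against a test environment, with service-agreement
    authorization, to avoid production or denial-of-service impact."""
    for case in synthetic_cases:
        submit_transaction(case)
    detected = sum(1 for case in synthetic_cases if is_flagged(case))
    rate = detected / len(synthetic_cases)
    return rate, rate >= target_rate
```

Because the cases are all known-suspicious, the measured rate is a detection (true-positive) rate only; estimating false positives would require a separate set of known-benign cases.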
Invite all of the right people to the party
When considering developing an AI solution, it’s important to include all relevant decision makers upfront, including business stakeholders, IT, compliance, and internal audit. This ensures all critical information on requirements is gathered before planning and work begins.
A hospitality company wanted to automate its process for responding to data subject access requests (DSARs) as required by the General Data Protection Regulation (GDPR), Europe’s strict data-protection law. A DSAR requires organizations to provide, on request, a copy of any personal data the company is holding for the requestor and the purpose for which it is being used. The company engaged an outside provider to develop an AI solution to automate DSAR process elements but did not involve IT in the process. The resulting requirements definition failed to align with the company’s supported technology solutions. While the proof of concept verified the solution would result in more than a 200% increase in speed and efficiency, the solution did not move to production because IT was concerned that the long-term cost of maintaining this new solution would exceed the savings.
In a similar example, a financial services organization didn’t involve its compliance team in developing requirements definitions. The AI solution being developed did not meet the organization’s compliance standards, the provability process hadn’t been documented, and the solution wasn’t using the same identity and access management (IAM) standards the company required. Compliance blocked the solution when it was only partially through the proof-of-concept stage.
It’s important that all relevant voices are at the table early when developing or implementing an AI/ML solution. This will ensure the requirements definition is correct and complete and that the solution meets required standards as well as achieves the desired business objectives.
When considering AI or other emerging technologies, organizations need to take the right actions early in the process to ensure success. Above all, they must make sure that 1) the solution they are pursuing meets one of the three key objectives — increasing revenue, improving profitability, or reducing risk, 2) they have processes in place to get the necessary data, 3) their build vs. buy decision is well-founded, and 4) they have all of the right stakeholders involved early on.
Scott Laliberte is Managing Director and Global Leader of the Emerging Technology Group at Protiviti.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.