Presented by KPMG
COVID-19 left business leaders no choice. Across industries, organizations went all in on digital transformation, and along with the fundamental changes these companies made to their operating models came an explosion in AI adoption. Business leaders seized the opportunity to level up their technology and take a bold leap forward in their AI capabilities, says Dr. Sreekar Krishna, national leader for AI and head of data engineering at KPMG.
“Some leaders saw that AI was on their road map perhaps two or three years down the line and made the important and necessary decision to accelerate its adoption now,” Krishna says. “The digital hurdles that they initially faced to AI adoption are now being removed very fast.”
The recent survey from KPMG, “Thriving in an AI World: Unlocking the Value of AI across 7 Industries,” bears that out. Seventy-nine percent of executives say they have an AI strategy that is at least moderately functional, while 43% say theirs is fully functional at scale. In the industrial manufacturing sector, 93% of executives say AI is at least moderately functional; in financial services it’s 84%; and in technology, 83%.
Driving the adoption of AI is the fact that many institutions, especially in financial services, retail, and similar sectors, have been experimenting with AI technologies for the past few years, Krishna says. Now, with a dire need to productionalize those technologies, they are focusing on the biggest questions: Where can AI deliver value, and where is it getting ahead of us?
Yet some leaders worry that this acceleration is happening too quickly. The KPMG survey, which looks at how business leaders across seven industries (technology, financial services, industrial manufacturing, healthcare, life sciences, retail, and government) perceive AI’s value, as well as AI-related pain points, risks, and challenges, found that despite the positive perception of AI, more than two in five (44%) of the executives surveyed believe that their industry is moving too fast with AI technologies. Why the contradiction?
“We think the belief some of our survey respondents have, that AI is moving too fast, can be bucketed into four categories,” Krishna says: the extraordinary pace of growth in AI capabilities, the “black box” nature of the algorithms, a lack of skills both internally and externally, and the organization’s overall digital readiness.
Pace of AI’s growth
First and foremost, leaders have seen what AI is capable of doing, Krishna says: use cases that demonstrate the power of AI technologies to accomplish things previously seen only in science fiction. This rapid pace of growth means institutions’ internal processes and skills cannot keep up with societal demands, especially if something goes wrong with their AI-backed offerings.
He points to highly complex AI technologies like the GPT-3 model, which produces remarkably human-like text in a range of styles and has been shown to generate everything from self-help blogs to short stories, songs, technical manuals, and more. These technologies are already out in the open-source world. And as Krishna points out, they could be embedded into ubiquitous tool chains in a matter of years, if not months, before the ramifications have been fully considered.
Much has been written (by humans) about the risk of the technology being used by bad actors, whether in the service of fake news or phishing scams. Krishna also cites recent deepfake concerns about AI image and video manipulation. These concerns stem from the fact that it is becoming harder and harder to distinguish what was produced by a real human from what was generated by a machine.
AI-backed business tools, which are becoming incredibly common across enterprises, are finding their way into critical decision pipelines. This means black box algorithms could now become at least partially responsible for handing down sensitive verdicts, from loan and credit decisions to physical security solutions, and more. These decisions, based on potentially biased data and assumptions, can profoundly impact businesses and lives.
Black box approach to algorithmic know-how
Those black box algorithms are already giving decision makers headaches, if not nightmares. As AI becomes more ubiquitous in the tools their organizations use, concern is growing about lack of explainability.
A decade ago, as these algorithms were first being developed, their decisioning use cases carried few serious consequences. For example, Krishna explains, “What’s really the worst thing that can happen on Netflix if they give you a bad recommendation? You watch a bad movie. On Amazon, it may recommend a bad product. The decision is still with the human, with the user. That’s the origin of a lot of these AI algorithms.”
Now many of the CXOs Krishna interacts with regularly are asking questions. If an AI algorithm provides a negative score for a user, say at a financial institution, or an insurance company, they may not know how the algorithm arrived at that conclusion. “That’s the fear,” says Krishna. “As long as an algorithm is not able to explain itself, it’s hard for decision-makers to say, we’ll make a decision based on that number. And this is something I constantly talk to my clients about. In fact, this is one of the driving factors for many of our clients asking for some form of regulatory need to help explain what’s going on under the hood.”
They want to know they can control the outcomes of the AI, if it’s controllable at all. If the AI is embedded into a tool, how much transparency does the tool maker need to give decision makers so that they can understand what the AI is doing and take responsibility for its decisions? Tomorrow, if one of their customers suffers a material impact, will they have to explain that it’s not because of a company policy, but because the tool made a cryptic ruling?
“I believe all of these are leading to that perception that our survey respondents shared where AI seems to be moving very fast,” says Krishna.
Lack of specialized skills both internally within the organization and externally in the market
C-suite and business leaders are realizing very quickly that they lack the skilled resources necessary to leverage, productionalize, and control rapidly evolving AI technologies. Some of this comes from institutions not budgeting the resources needed to reskill and upskill their own internal talent. There is also a general AI and digital talent shortage in the market, spanning AI developers and engineers, AI researchers, and data scientists.
Further, most AI skills do not yet fit neatly into existing enterprise operating models. “Too often my clients complain how they keep losing their AI talent within 6 months of hiring,” says Krishna. This is driven largely by the fact that these skills are in high demand, and if the institution hiring the talent has not changed its operating model to accommodate new digital and data skills, it will be hard to retain this talent pool.
“Preparing the organization for the new talent first starts by upskilling the existing business talent to understand and appreciate the need for AI,” adds Krishna. “Too many times, organizations find themselves in situations where the newly hired talent does not understand the business need, while the existing business talent does not understand the need for a data scientist amongst them.”
Worry about keeping up
The growth of AI technologies has certainly been driven by the digital transformation trend of the past decade. Unfortunately, not enough institutions have spent the time and resources to accelerate toward a digital future. From business processes to tool integrations to workforce shaping, organizations don’t feel ready for a digital future, especially one powered by AI. C-suite executives and business leaders alike feel they haven’t invested nearly enough in digital readiness. They increasingly feel their institution is falling behind while the pace of AI innovation continues to accelerate. The fear is that they may not be able to catch up, finding that scaling even their smaller projects across the organization is far more challenging than expected, especially if they haven’t yet laid the groundwork for AI, from data mastery to cultural transformation.
The chasm between a traditional organization, which may have spent its money doing business as usual, and a startup, where VC money has gone exclusively into digital and AI initiatives (from the technology, to the data preparation, to attracting and training a workforce with the right skill set), is particularly wide.
“A traditional financial organization compared to a fintech today — from a resource perspective, from a skill perspective, from a process perspective — they are literally worlds apart,” Krishna says. “Leaders know that. That’s another reason they feel like AI is moving too fast for them to keep pace with.”
The real-world promise of AI
But despite those concerns, artificial intelligence is gaining a foothold in organizations of every size as its promise becomes more real every day, and the path to realizing that promise becomes clearer.
The survey found that nearly all executives surveyed are confident AI can help tackle their industry’s most pressing issues, such as detecting fraud (93%), and 60% say AI is adding even more value to their own organizations than anticipated. Most organizations are prioritizing AI education and skill-set building for their employees, a fundamental need, if not the most foundational one, for developing a successful AI strategy.
“I’ve been talking to a lot of industry leaders today about digital training, digital education, AI education at different levels,” says Krishna. “You don’t need to teach linear algebra to everyone, but you can teach them how these systems work and interface.”
It’s an issue so vital, as AI becomes embedded across businesses and lifestyles, that it needs to be tackled at the societal level with increased investment in STEM education. “We don’t have the workforce, the necessary skills,” explains Krishna.
He emphasizes the importance of early education in programming and logic-based design in preparing the next generation to understand the complex technologies growing up around them. It’s this investment that will truly realize the promise of AI.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact email@example.com.