Presented by Outshift by Cisco
Enterprises are in a relentless sprint to seize the power of generative AI. The hype cycle is always in play, especially around AI, but Gartner tells us that in just two years, 50% of the boards of directors at the world's 500 largest companies will be turning to gen AI for everything from ideation to scenario planning and decision optimization. Fear of missing out is certainly part of what has companies jumping on the AI bandwagon, says Vijoy Pandey, SVP of Outshift by Cisco, but there's more to it -- and organizations drawn by the promise of gen AI must also contend with the perils of scaling it.
“It’s one of the most important tech transitions in the past decade or more,” Pandey explains. “It’s a game changer for creativity, productivity, product innovation and new lines of revenue. It will not only redefine what each of those looks like in the future, it’s also unlocking tremendous new potential use cases.”
Scaling complexity: The challenges ahead
There are a number of challenges when it comes to AI -- cost, accuracy, data and personally identifiable information (PII) security, bias and more. But the biggest challenge companies face right now, at the point when they're ready to launch a gen AI initiative, is the difficulty of managing scale, especially in light of the rapidly evolving complexity of the space.
“The evolution in this space is so rapid, so diverse, and so multi-pronged that it’s hard for anybody to keep up, and to manage the complexity,” Pandey explains. “Second, the lack of skill set is a barrier to successful adoption, especially when it comes to transformative large language models. It’s a relatively new technology to most adopters, and has particular challenges that take particular expertise to unravel, which can make it near-impossible to easily deploy and update."
Scaling out while embroiled in all this complexity is a difficult undertaking. Small proof-of-concept trials can be deployed relatively easily -- Pandey points to the number of gen AI startups that have sprung up suddenly in the space.
“Right now, a whole slew of these companies are just producing wrappers, purpose-built user interfaces for open-source models, such as Llama, OpenAI’s API, Stable Diffusion and other players such as Anthropic or Cohere,” he says. “There’s no differentiation there; they’re not adding real value. Those companies will disappear entirely.”
The value proposition must be at the heart of a gen AI project, and that boils down to a productivity outcome, a creativity outcome or unlocking new businesses or streams of revenue.
“You have to figure out what the outcome is, and to really measure those outcomes, you cannot do that at a small scale,” he says. “What’s happening right now, folks and organizations are going through proof of concepts, things that are easy, like those wrappers, and see interesting outcomes that have decision-makers saying, ‘let’s just jump.’”
Unfortunately, actually rolling out a real gen AI product or initiative at scale, and with accuracy, is a complicated undertaking.
Challenges in the pipeline: data, privacy, cost management and more
“When you start deploying this at scale, you realize that everything, the entire pipeline, is a challenge,” Pandey says. “It starts with the data, the grounding truth in all of this. There might be a few lucky organizations out there that have a data lake, but 99.9 percent of organizations out there are more likely to have a series of data puddles.”
From acquisitions to green field deployments and new product introductions, data exists all over the organization in disparate formats, labeled differently and with its own complexities in how it connects to the tech stack, its identity and access controls, and more. Pooling those data puddles into a larger data lake is the first major challenge. The second major issue is the quality of the data being fed into large language models.
“Just because you have data doesn’t mean that it can be used to drive an outcome,” Pandey explains. “Data is often dirty, usually because of its provenance. It might not be usable because of issues around security or responsible AI, it might be locked away, it might simply be unworkable to achieve the goal.”
Even if you’ve cleaned your data carefully, using it responsibly also requires establishing guardrails so as not to accidentally mishandle your customers’ data, or leverage data that doesn’t belong to your organization. Privacy becomes a tricky issue, as data flows back and forth between the model and the user. These models learn with every question or request, including any sensitive or confidential data you feed them. Enterprise gen AI requires a responsible AI approach aimed at protecting privacy, eliminating bias and ensuring explainability for customers.
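A minimal guardrail of the kind described here can be sketched as a pre-prompt filter. The patterns below are simplified, illustrative assumptions; a production system would rely on a dedicated PII-detection service rather than a handful of regexes:

```python
# Illustrative pre-prompt guardrail: redact obvious PII patterns before a
# prompt leaves the enterprise boundary. These regexes are deliberately
# simplified assumptions, not a complete PII taxonomy.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# → Contact [EMAIL] or [PHONE].
```

The same filter can be applied to model responses on the way back out, so sensitive data is caught in both directions.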
And then when choosing a model, the complexity of the space creeps in again. There is a proliferation of model options, and right out of the gate that can slow down decision-making: which model do you choose, should you customize it, how do you customize it, how do you evaluate and tune it, and so on.
“That entire pipeline of choosing a model, customizing it, fine-tuning it and re-tuning it quickly, and doing it constantly -- because, of course, it has to be an iterative process -- is crucial to get right in AI, but harder to do in the gen AI space,” Pandey says. “The world around you and boundary conditions are changing constantly. How do you keep up quickly enough and keep iterating, and still drive a measurable amount of improvement?”
Through all this, the skills needed -- from the data science layer to the application science layer, the MLOps layer and so on -- are still difficult to source. Gen AI remains a relatively new technology to most adopters, with challenges that take particular expertise to unravel, which can make it incredibly difficult to deploy, update and manage.
Finally, generative AI is expensive, not only to get up and running, but once it’s in play. Every request and user prompt has a price, which adds up extraordinarily quickly, and the enormous amount of compute power necessary to run a generative AI solution means shelling out for costly hardware. (Meanwhile, OpenAI reportedly lost $540 million last year, due mainly to computing costs -- suggesting that for newer players, it may be worth understanding the true startup cost of embarking on a gen AI product or service.)
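The per-request arithmetic is worth running before committing. A rough sketch of the calculation, using entirely hypothetical prices and volumes:

```python
# Back-of-the-envelope cost model for per-token API pricing.
# All figures in the example are illustrative assumptions, not published rates.

def monthly_inference_cost(
    requests_per_day: float,
    avg_input_tokens: float,
    avg_output_tokens: float,
    price_per_1k_input: float,   # USD per 1,000 input tokens (assumed)
    price_per_1k_output: float,  # USD per 1,000 output tokens (assumed)
    days: int = 30,
) -> float:
    """Estimate monthly spend from request volume and token pricing."""
    per_request = (
        avg_input_tokens / 1000 * price_per_1k_input
        + avg_output_tokens / 1000 * price_per_1k_output
    )
    return requests_per_day * per_request * days

# Example: 50,000 requests/day, 500 input + 300 output tokens each,
# at hypothetical rates of $0.01 / $0.03 per 1K tokens.
cost = monthly_inference_cost(50_000, 500, 300, 0.01, 0.03)
print(f"${cost:,.0f} per month")  # ≈ $21,000 per month
```

Small per-request numbers compound fast: in this sketch, 1.4 cents per request becomes roughly $21,000 a month, before any hardware, fine-tuning or staffing costs.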
The key to leveraging gen AI
The answer to the complexity of enterprise gen AI is relatively simple, Pandey says: narrowing down what differentiates your company, because your proprietary data and your in-depth subject matter expertise are what will allow you to excel.
“It’s thinking about where you can add value as an organization or as an individual,” Pandey says. “You need to step back and figure out not only your deep tech differentiation, but also where to jump in, at what level of preparedness, before you take the leap.”
It’s also essential to understand that you can’t leverage gen AI with only people and processes in place -- you need to tackle your project in a software-centric way. This is a particularly crucial strategy for managing the complexity, he explains. One of the biggest challenges is that it’s a multi-model world -- you’ll need to work with hundreds of models to achieve the lofty promise of generative AI. First you’ll need to accept that reality, and then decide how to manage it.
And that means building or working with someone who can help you build an abstraction layer, a software framework that simplifies the interaction with multiple providers. Some abstraction layers can help standardize prompts, but it goes beyond APIs. Abstraction layers for user personas, the people who are actually using the product or tool, are essential. You also need a variety of software frameworks to measure and eliminate bias and improve fairness, help manage data and model security as well as PII, help you iterate quickly and get feedback on your KPIs, and more.
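Such an abstraction layer can be sketched as a thin routing interface. The class and method names below are hypothetical, and the echo provider is a stand-in; real adapters would wrap each vendor's own SDK behind the same interface:

```python
# Minimal sketch of a provider-agnostic abstraction layer for a multi-model
# world. Names here are illustrative, not any vendor's actual API.
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """Uniform interface the application codes against."""
    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...

class EchoProvider(ModelProvider):
    """Stand-in adapter; a real one would call a vendor SDK here."""
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[echo] {prompt[:max_tokens]}"

class ModelRouter:
    """Routes requests to named providers, so swapping vendors is a
    configuration change rather than an application rewrite."""
    def __init__(self) -> None:
        self._providers: dict[str, ModelProvider] = {}

    def register(self, name: str, provider: ModelProvider) -> None:
        self._providers[name] = provider

    def complete(self, provider_name: str, prompt: str) -> str:
        return self._providers[provider_name].complete(prompt)

router = ModelRouter()
router.register("default", EchoProvider())
print(router.complete("default", "Summarize Q3 results"))
```

Because every provider sits behind the same interface, the same router is also a natural place to hang the cross-cutting frameworks mentioned above: logging for KPIs, redaction for PII, and evaluation hooks for bias and accuracy.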
And finally, it’s about narrowing your focus by picking and choosing the use cases that matter the most to you, and then excelling in that space.
“That, to me, is what success will look like. Usually, the tendency in these new areas is, well, it’s a new and exciting area, let me insert myself into all the new possibilities,” he says. “Let’s not lick all of the cupcakes. Just pick the cupcake you need, decide what your particular value-add is in this ecosystem, and let other companies innovate around you, for you.”
Looking to the future
The global market has been irrevocably changed by AI, and particularly generative AI, and that understanding must be a company’s driving force.
“All digital transformation moving forward should be AI-focused, and AI-first,” Pandey says. “If you’re not doing that, then you’re getting left behind. And because of the complexity and the pain that you have to go through to get all of this going, you’d better choose use cases where you get a 100x return, otherwise, it’s just not worth the journey.”
And with the cost you’re going to sink into this, make sure your KPIs are crystal clear, because otherwise it’s going to be a tough road ahead, he says.
“Start measuring accuracy. Start measuring process time. Start measuring cost. Start measuring security. Start measuring the number of organizations that need to talk to each other and the time spent in doing things,” he explains. “Drive those down aggressively.”
As the technology evolves over the next few years, it will get simpler, more cost-effective, more accurate, more trustworthy, and more accessible and inclusive, he adds.
“We’re all going to solve for these problem statements. Just know that it is going to happen,” Pandey says. “Know that this is also a step function change. It’s going to change all of us, and I believe humanity at large, going forward, in how we approach productivity and creativity and innovation. Jump in, use it. You don’t want to be left behind.”
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
