Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success.
Welcome to the AI Beat – my new column digging into some of the week’s artificial intelligence (AI) news and attempting to put it into context.
When I asked Gartner analyst Sid Nag yesterday why the AI-related cloud news released this week at Google Cloud Next and Microsoft Ignite seemed like a "tsunami" of product announcements, he chuckled.
While neither of us had our cameras on for the interview, I could feel Nag, who focuses on cloud services and technologies, nodding sagely. “I think I know why that is,” he said.
I certainly needed some expert guidance. After all, I’ve been on the AI beat at VentureBeat for six months now, but I have yet to fully adjust to the seasonal PR routines of Big Tech – in fact, this week’s barrage was of the sort that always reminds me of this comic from The Oatmeal.
There were announcements at Google Cloud Next ranging from its new Vertex AI Vision service, a computer vision-as-a-service capability, to a new AI Agents service and the open-source OpenXLA project. There were also additional incremental AI improvements announced at Next, including support for the Nvidia Merlin recommender system framework, AlphaFold batch inference and TabNet support.
Next came a boatload of Azure AI news out of Microsoft Ignite bucketed around productivity, including prebuilt AI models available in Form Recognizer, expanded summarization and language support in Azure Cognitive Services, new features in speech-to-text and text-to-speech, and the debut of DALL-E 2 as part of Azure’s OpenAI Service, by invitation only.
Using AI to make cloud an intelligent platform
The Big Tech cloud providers, Nag explained – Google, Microsoft and AWS are the top three, with IBM and Oracle rounding out the top five – have moved past the era of commoditized infrastructure capabilities, such as compute, storage and networking, toward a new goal of making the entire infrastructure more intelligent and predictable. The new trend, he said, is applying AI to those infrastructure needs.
“I think the pedestrian use of AI has become more pervasive, which is something I’ve been pushing for the past three or four years with the hyperscalers,” he said.
Big Tech, he explained, wants to use AI effectively up and down the cloud stack to make the cloud an intelligent platform, as well as applying a layer that democratizes AI and brings it to the business user.
Lots of ingredients, but no clear recipe
The problem with so many cloud announcements about AI-related capabilities, however, is that Big Tech isn’t properly communicating how to combine those capabilities intelligently, Nag pointed out.
“If I want to cook chicken parmesan,” he explained, “which ingredients do I need and what proportion of those ingredients? What’s the recipe I should be using? I think there’s a lot of things being announced by cloud providers, but they should really up-level the game as to how you can use those things intelligently to drive outcomes.”
Fear of falling behind in a multicloud era
The Big Tech AI announcements coming so close together – Oracle is on deck for next week at Cloud World, teasing AI news with Nvidia – is no accident, Nag said.
“Nobody wants to fall behind,” he said. “Historically, Google has been ahead of the game in the AI business; it’s almost like Microsoft and others are feeling the heat.”
In an era of multicloud – where an organization uses a combination of clouds to distribute various applications and services – the battle for customers looking to employ sophisticated AI systems has become fiercer than ever.
“[Multicloud] is the biggest use case we see today,” said Nag. Historically, workloads and applications moving to the cloud were the low-hanging fruit of things like email applications, Office 365, CRM or ERP. Now, however, organizations are moving heavy-duty, complex applications to the cloud.
“They’re moving the core crown jewels,” he explained. “For example, if I’m a hedge fund in downtown Manhattan and I have a specialized application written to do algorithmic trading, that’s what I depend on to layer up on my competition because executing trades in nanoseconds matters.”
A cloud AI war for best-in-class
As that starts to happen, CIOs and IT leaders look for best-in-class components to service that workload – so in addition to a primary cloud vendor, an organization might bring on a secondary or tertiary provider.
“It’s not that the primary vendor doesn’t have the other componentry, but they may or may not be best in class,” he said, adding that once the secondary vendor gets a foot in the company’s door, it might look to expand its services.
With companies worldwide expected to spend an estimated $494.7 billion on cloud computing this year, according to Gartner – up 20% from 2021, and predicted to reach $600 billion by the end of 2023 – there's a lot of money at stake. And Nag told the Wall Street Journal [subscription required] in April that it's the "willingness of CIOs to purchase higher-valued features that is fueling public cloud spending growth."
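As a back-of-the-envelope check, the Gartner figures cited above are internally consistent – a quick sketch (the variable names are mine, not Gartner's):

```python
# Sanity check on the Gartner cloud-spending forecast cited above.
spend_2022 = 494.7   # billions USD, estimated spend "this year" (2022)
growth_from_2021 = 0.20  # "up 20% from 2021"
spend_2023 = 600.0   # billions USD, predicted by end of 2023

# Implied 2021 baseline: 2022 spend divided by the 20% growth factor.
implied_2021 = spend_2022 / (1 + growth_from_2021)

# Implied year-over-year growth from 2022 to 2023.
implied_2023_growth = spend_2023 / spend_2022 - 1

print(f"Implied 2021 spend: ${implied_2021:.1f}B")          # roughly $412B
print(f"Implied 2022-2023 growth: {implied_2023_growth:.1%}")  # roughly 21%
```

In other words, the forecast implies cloud spending growth holding above 20% a year through 2023 – the "strong usage tailwind" the analysts below are pointing to.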
Rohit Kulkarni, an analyst at MKM, weighed in on this topic in a research note last week [subscription required]. He said he is “convinced that we are on the cusp of accelerating Big Tech AI wars,” though it is unclear who holds the early edge when it comes to commercial success.
But he said that he is certain about one thing: “Commercial rollout of AI apps requires significant storage and computing resources, thus public cloud vendors could benefit from a strong usage tailwind over the next several years.”
Hmm … I wonder why I’m suddenly craving chicken parmesan?
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.