

Artificial intelligence (AI) and machine learning (ML) are disrupting every industry. Their impacts and integrations will only continue to grow. 

And ultimately, the future of AI is in distributed computing, Ion Stoica, cofounder, executive chairman and president of Anyscale, told the audience this week at VentureBeat’s Transform 2022 conference.

Distributed computing allows components of software systems to be shared among multiple computers and run as one system, thus improving efficiency and performance. 

But while distributed computing is necessary, writing distributed applications is hard. “It’s even harder than before,” Stoica said. 


In particular, distributed computing for AI and ML poses many challenges: distributed systems vary widely in their difficulty of implementation, and engineers must test for every mode of network and device failure, as well as for different permutations of failures and bugs. 

This has created an opportunity for companies like Anyscale, which offers a set of tools that enable developers to build, deploy and manage distributed applications. 

The company was founded by the creators of Ray, the distributed AI open-source framework that simplifies scaling of AI workloads to the cloud. Ray allows users to transform sequentially running Python code into a distributed application with minimal code changes. 

Serverless and cloud-agnostic

Anyscale’s platform is serverless, cloud-agnostic and supports both stateless and stateful computations. It abstracts away servers and clusters and provides autoscaling. 

As Stoica noted, depending on the dataset, the compute requirements to train state-of-the-art models continue to grow by orders of magnitude. For example, Google’s Pathways Language Model (PaLM) – a single model that generalizes across domains and tasks with high efficiency – has 540 billion parameters. And some of the largest models have over 1 trillion parameters. 

There is a huge gap between the demands of ML applications and the capabilities of a single processor or server. Similarly, Stoica pointed out that, when Apache Spark was developed and released in 2014, all machines were considered homogenous. But that assumption can no longer be made, as today’s landscape involves many different hardware accelerators.
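A back-of-envelope calculation (my numbers, not from the talk) makes that gap concrete: merely holding a model of PaLM’s scale in 16-bit precision dwarfs the memory of any single accelerator.

```python
# Rough estimate: memory just to store a 540-billion-parameter model in fp16.
# Numbers are illustrative; training requires several times more (optimizer
# state, gradients, activations).
params = 540e9
bytes_per_param = 2  # fp16
total_gb = params * bytes_per_param / 1e9
print(f"{total_gb:.0f} GB")  # 1080 GB, vs ~80 GB on one high-end GPU
```

Even before training overhead, the weights alone must be sharded across many devices, which is why distribution is unavoidable at this scale.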

“There is no other way to support these workloads without distribution – it’s as simple as that,” Stoica said. 

There are multiple stages to building an ML application, he said – such as labeling, training, tuning and reinforcement learning. “Each of these stages you need to scale, each of them you typically have a different distributed system,” Stoica said. 

Building end-to-end pipelines requires stitching these systems together, then managing and maintaining each one – ultimately, what he described as a long and laborious process. 

“Our mission is about making distributed computing easier, scaling these workloads easier,” he said of Anyscale and Ray. 

What inning is AI implementation in?

Because AI and ML are so far-reaching, use cases for Ray are “all over the place.” The tool has been used in the financial industry, retail and manufacturing – even in America’s Cup applications for training crew members. 

Another example is game testing. “In online gaming, and online games, you don’t have enough humans in a particular room or area of the game to interact with,” Stoica said. 

Addressing the concept of “crawling, walking or running,” Stoica described AI as being essentially where big data was 10 years ago. It’s taking time to mature, he contended, because it’s not only about developing tools, but training experts. 

“It’s taking time because the time [needed] is not only for developing tools,” he said. “It’s training people. Training experts. That takes even more time. If you look at big data and what happened, eight years ago a lot of universities started to provide degrees in data science. And of course there are a lot of courses now, AI courses, but I think that you’ll see more and more applied AI and data courses, of which there aren’t many today.”

About eight years ago, for instance, colleges and universities began to provide degrees in data science, he pointed out. Now, more AI courses are being offered, and more applied AI courses will emerge, he predicted. 

To use a baseball analogy, “we are really in the first inning,” Stoica said.

