Founded in 2020, Arize AI aims to provide ML observability. Its platform surfaces common issues such as bias, data-integrity problems and data drift, all of which can lead to incorrect predictions. Data drift in particular is a major issue and may well have been the root cause of numerous high-profile ML failures in recent years, including one at Equifax.
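To illustrate the kind of problem observability tools flag, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), which compares a feature's production distribution against its training baseline. This is a generic illustration of the concept, not Arize's actual implementation; the thresholds in the comment are a widely used rule of thumb.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training) sample and a
    production sample of the same feature. Common rule of thumb:
    PSI < 0.1 -> stable, 0.1-0.25 -> moderate shift, > 0.25 -> major drift."""
    # Bin edges are derived from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; floor at a small value to avoid log(0)
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature values at training time
shifted = rng.normal(1.0, 1.0, 10_000)   # production values after drift
print(psi(baseline, baseline[:5000]))    # same distribution: near 0
print(psi(baseline, shifted))            # shifted distribution: large PSI
```

In practice, an observability platform computes metrics like this continuously across every feature and prediction stream, and alerts when a score crosses a threshold.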
The need for ML observability has also fueled demand for Arize AI’s technology, with the company raising $38 million in a series B round of funding earlier this month. Arize is looking to further accelerate its growth and reach, announcing the availability of its platform in the Google Cloud Marketplace today.
“Every single company is investing in AI and they need tools and infrastructure to put models into the real world,” Aparna Dhinakaran, cofounder and CPO at Arize, told VentureBeat. “Google has an amazing platform with Vertex AI and we are a very complementary solution as Arize AI focuses on observability.”
Arize and Google are hardly strangers
The availability on Google is not a huge leap for Arize, as the company is already running its technology on the Google Kubernetes Engine (GKE) platform. GKE is a managed service operated by Google for running Kubernetes, which is a widely deployed container-orchestration system.
“I knew early on that we were going to be all-in on Kubernetes for a variety of reasons, and I knew that I didn’t want to be in the business of operating a Kubernetes cluster,” Michael Schiff, founding engineer and chief architect at Arize, told VentureBeat. “I would say that GKE has been one of the main things that has allowed us to go from day zero to a series B without what you would call a traditional operations or infrastructure team.”
Dhinakaran added that with GKE as a base, Arize has been able to support its growing customer base, which has involved streaming billions of AI inference data points into the observability system. She noted that her company has large customers, like Instacart and Etsy, that require large scale, which Arize has been able to support, thanks in part to the infrastructure that GKE provides.
As to why Arize is only entering the Google Cloud Marketplace now, especially since the company itself already relies on Google infrastructure, the answer comes down to timing and demand. Arize launched its self-serve offering in March of this year. Dhinakaran said that until now, users have largely gone to the Arize website and signed up directly for the company's software-as-a-service (SaaS) offering.
Dhinakaran noted that in recent months, she has seen an increasing number of users who were also using Google's Vertex AI MLOps platform and were asking for integrations. Vertex AI also runs on GKE. By making Arize available in the Google Cloud Marketplace, users can now more easily get started directly in Google Cloud, with tighter integration into Vertex AI as well as other Google Cloud services.
The intersection of Vertex AI, ML and infrastructure observability
For Drew Bradstock, director of Google Kubernetes Engine product management, having Arize in the Google Marketplace is a net benefit.
“Vertex actually runs on GKE and so does Arize for the exact same reason, which is the ability to run large-scale workloads with a very small IT staff,” Bradstock told VentureBeat.
Arize provides a deeper and different level of observability into how a workload is running than what a user might get running GKE or Vertex AI on their own. Dhinakaran said there is a difference between observing the infrastructure that runs ML and observing how the ML models themselves are actually performing.
ML observability also differs from the application performance management (APM) space.
Dhinakaran explained that when troubleshooting an ML model, the question is not just whether the model is fast or slow because of infrastructure. What's often far more important, she said, is how the model was built, what data it was trained on, how its parameters are configured and a host of other ML-specific concerns.
“We want to make it easy for ML engineers to be able to solve model problems when AI isn’t working in the real world,” she said.