Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More
San Francisco-based New Relic, a company that offers a cloud-based observability platform to help enterprises visualize, analyze, and optimize their entire software stack, has announced a solution to monitor the performance and accuracy of machine learning models in real time.
In today’s data-driven landscape, organizations are leaning heavily on AI and machine learning applications to improve business resilience and gain a competitive advantage. A recent survey conducted by IBM found that almost one-third of businesses now use artificial intelligence, and as many as 43% have accelerated their rollout of AI as a result of COVID-19.
However, as adoption continues to grow, so does the gap between the data science teams developing ML models and the DevOps teams operating them. The reason? Most engineers build and train models in siloed environments, which limits collaboration on monitoring and governing those models in production. As a result, teams can fail to notice models that drift into irrelevance over time, particularly models trained on static data, and consequently lose out on millions.
New Relic integrates model performance monitoring
To prevent this, New Relic is extending the capabilities of its flagship observability platform, New Relic One. The company said on Wednesday that the platform can now be enhanced with model performance monitoring integrations, giving data science and DevOps teams a single place to monitor and visualize model performance telemetry data, including critical signals such as recall, precision, and accuracy.
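To illustrate the kind of signals described above, the sketch below computes recall, precision, and accuracy from confusion-matrix counts. This is purely illustrative plain Python; the function name and counts are hypothetical and do not reflect New Relic's APIs or data model.

```python
# Illustrative only: the classification signals named above (recall,
# precision, accuracy), derived from confusion-matrix counts.
def classification_signals(tp, fp, tn, fn):
    """Return precision, recall, and accuracy from raw prediction counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual positives, how many were found
    accuracy = (tp + tn) / (tp + fp + tn + fn)        # overall fraction correct
    return precision, recall, accuracy

precision, recall, accuracy = classification_signals(tp=80, fp=20, tn=90, fn=10)
print(f"precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")
```

Tracking these three numbers over time is what lets a team spot a model whose production behavior no longer matches its training-time evaluation.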
The platform, as New Relic’s general manager for AIOps Guy Fighel explained in a blog post, is gaining support for popular MLOps frameworks such as AWS SageMaker, DataRobot (Algorithmia), Aporia, Superwise, Comet, Dagshub, Mona, and TruEra. Each of these appears within New Relic Instant Observability (I/O) — an open-source ecosystem of quickstarts, integrations, and resources in New Relic One — and can be integrated within minutes, complete with custom performance dashboards and other observability building blocks.
This will ultimately allow companies to monitor their ML models and their interdependencies with the rest of the application stack, and to make the changes needed to keep their algorithms relevant in the long run — for maximum business impact.
New Relic also notes that data science and DevOps teams can use the offering to set up predictive alerts for unusual model-related changes. This way, once an issue is detected, the teams can collaborate in the production environment to contextualize the situation and make decisions to address the problem.
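The alerting workflow described above boils down to comparing a live metric against a baseline and firing when the degradation exceeds a tolerance. The sketch below shows that condition in plain Python; the function, threshold, and values are hypothetical assumptions, not New Relic's alerting API.

```python
# Hypothetical sketch of an alert condition: fire when a model metric
# (e.g., accuracy) drops more than `tolerance` below its baseline.
def needs_alert(baseline, current, tolerance=0.05):
    """Return True when the current metric has degraded beyond the tolerance."""
    return (baseline - current) > tolerance

# Accuracy fell from 0.92 to 0.84 — an 0.08 drop exceeds the 0.05 tolerance.
print(needs_alert(baseline=0.92, current=0.84))  # True
# A 0.02 drop stays within tolerance, so no alert fires.
print(needs_alert(baseline=0.92, current=0.90))  # False
```

In practice the baseline would itself be computed from a rolling window of recent telemetry rather than fixed by hand, but the trigger logic is the same.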
“We are committed to making observability a daily best practice for every engineer, and with the launch of New Relic Model Performance Monitoring, we deliver the only unified data observability platform that gives Data Science and DevOps teams unprecedented visibility into the performance of their machine-learning-based applications,” Fighel said.
The development is the latest step by New Relic to strengthen its footprint in the enterprise observability space and take on players like Dynatrace and Datadog. Back in February, the company added a visualization tool called Explorer to make it simpler for IT professionals to discover the root cause of issues.
Globally, IT monitoring and observability is estimated to represent a $17 billion market opportunity.