As part of an effort to make it easier to manage data across a hybrid cloud computing environment, IBM has unveiled a 1U all-flash storage system for on-premises IT environments that can scale to hold 1.7 petabytes (PB) of data.
IT organizations training AI models with data that for compliance and security reasons can’t be shifted to a cloud computing environment require a steadily increasing amount of storage capacity, IBM Storage general manager Denis Kennelly said.
The FlashSystem 5200i, with data capacity starting at 38TB, is the entry-level member of the IBM all-flash family of storage systems, now offered for 20% less than its predecessor. IBM is also adding 2U models to the FlashSystem series to deliver higher I/O performance.
The IBM storage systems are unique in that they are all compatible with the IBM Spectrum storage management software, which IBM makes available on its own cloud as well as on Amazon Web Services (AWS), Kennelly said. IBM has also committed to making its IBM Spectrum Virtualize for Public Cloud software available on Microsoft Azure in the third quarter. That capability is critical because it enables IT teams to replicate and migrate data across hybrid cloud computing environments, Kennelly added.
IBM also announced today that next month it will add support for IBM Cloud Satellite to its FlashSystem systems, as well as IBM SAN Volume Controller, IBM Elastic Storage System, and IBM Spectrum Scale software. IBM Cloud Satellite, currently in beta, is a management platform IBM created to centralize the management of hybrid cloud computing environments. IBM Cloud Satellite is built on an instance of the Red Hat OpenShift platform running on Red Hat Enterprise Linux (RHEL), which makes it possible to deploy the management platform anywhere.
In general, the ability to move data between multiple clouds and on-premises IT environments has become a critical requirement as the centers of data gravity in the enterprise continue to shift, Kennelly said. Organizations need to be able to flexibly move and replicate data that needs to be accessed by a growing number of applications running on different platforms. It’s not always feasible or practical to remotely access data when many of the applications running are increasingly latency-sensitive, thanks in part to higher reliance on microservices.
At the same time, the amount of data being accessed is increasing as organizations look to infuse AI capabilities into their applications. The AI models being constructed require access to massive amounts of data.
Collectively, all those requirements create a need to manage and govern data more efficiently than ever, Kennelly said.
“Data is the new oil for business,” Kennelly said. “But data, like kerosene, in the wrong hands is a dangerous thing.”
Ultimately, traditional IT operations will need to absorb what are today often separate DataOps and machine learning operations (MLOps) disciplines that have emerged around data science initiatives, Kennelly said. In the meantime, IBM is making a case for an approach to storage that will make it easier to achieve that goal in the longer term.
IBM is not the only provider of storage and data management platforms with similar ambitions. But now that IBM has positioned IBM Cloud as one platform among many it supports, its entire approach to hybrid cloud computing is continuing to evolve. The challenge is that hybrid cloud computing requires a lot more to achieve than simply accessing compute resources on different platforms. The data those compute engines need to access must be just as readily accessible whenever and wherever required.