
Big data and in-memory processing: kaizen for the data center



This post is sponsored by SAP. As always, VentureBeat is adamant about maintaining editorial objectivity.

Late last year, database and business management software company SAP had a blow-out quarter, the company’s best in 40 years. Double-digit growth and $2.1 billion in profit: not bad at all.

One of the causes, according to the New York Times, was the company’s quick adoption of in-memory processing. But what is in-memory processing, and how does it help?

You could think of it as kaizen for the data center.

Kaizen is a Japanese manufacturing practice of continuous improvement: simplifying production by removing unnecessary steps and waste, such as waiting time. One of the goals in a kaizen-ified factory is to bring all source materials and parts close to the place where they are actually used.

That’s exactly what in-memory computing does.

One of the main uses for in-memory computing is the real-time manipulation of huge data sets. Let's consider an example: Home Depot's sales. Executives at the company don't need to know the location of every rake, nail, lawnmower, and 2×4 over the past month, or even the past year, but their databases do. When executives are trying to understand how sales are doing, forecast future earnings, or model how changes in business strategy might affect sales, logistics, and revenue, they're working with big data. That data is a lot like the raw materials entering a factory: the closer those materials are to the production line, the more efficiently the factory can turn out finished goods (in this case, business analytics).

In a traditional database, the data lives on hard disks similar to the drives in your desktop or older laptop. Those disks are slow and relatively far away: the data reaches the “factory,” the server’s CPU, over comparatively slow connections. And when gigabytes or even exabytes of data are needed, data access becomes one of the main drags on database performance. The raw materials just can’t get in fast enough.

In an in-memory database, the data lives right next to the CPU, not in the next town. It doesn’t sit on hard drives that have to physically spin (like a CD or an old cassette tape) before data can be found, read, and transmitted. Instead, the data lives in main memory (RAM), which is nearly instantly accessible, and in some cases on solid-state drives like those in USB thumb drives or a MacBook Air. That puts the data much closer to the CPU, meaning the server can churn out analyses of those Home Depot sales figures that much faster.
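For the technically curious, here is a minimal sketch of that disk-versus-memory gap. It uses Python’s built-in sqlite3 module rather than SAP’s software, and the “sales” table and its figures are made up for illustration. The same aggregation query runs against an on-disk database file and an identical in-memory one; the difference grows with the size of the data and is most pronounced when the data no longer fits in the operating system’s cache.

import os
import random
import sqlite3
import time

def build(conn, rows=200_000):
    # Create a toy "sales" table and fill it with random line items.
    conn.execute("CREATE TABLE sales (store INTEGER, sku INTEGER, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?, ?)",
        ((random.randrange(2000), random.randrange(50_000), random.random() * 500)
         for _ in range(rows)),
    )
    conn.commit()

def timed_query(conn):
    # An analytics-style aggregation: total revenue per store.
    start = time.perf_counter()
    conn.execute("SELECT store, SUM(amount) FROM sales GROUP BY store").fetchall()
    return time.perf_counter() - start

if os.path.exists("sales_on_disk.db"):
    os.remove("sales_on_disk.db")           # start from a clean file

disk = sqlite3.connect("sales_on_disk.db")  # data stored in a file on disk
ram = sqlite3.connect(":memory:")           # data held entirely in RAM
for conn in (disk, ram):
    build(conn)

print(f"on-disk query:   {timed_query(disk):.4f} seconds")
print(f"in-memory query: {timed_query(ram):.4f} seconds")

The principle is the same on a real analytics workload: keep the data as close to the CPU as possible so queries spend their time computing, not fetching.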

Since a 64-bit system can theoretically address about 18 billion gigabytes of memory, capacity isn’t the constraint: a company can hold as much raw material, meaning data, as it wishes right there in the “factory.”
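For the curious, that figure is simply the size of a 64-bit address space:

2^64 bytes = 18,446,744,073,709,551,616 bytes ≈ 18.4 billion gigabytes (about 18 exabytes)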

That means reports and projections requiring the analysis of hundreds of gigabytes of data can be done in seconds, not hours. And that, in turn, makes enterprises both faster and smarter.
