Presented by Western Digital

Just five years ago, automotive industry analysts hailed the launch of the Vehicle 2.0 era: the first step away from the hardware-driven automobile, in which software and hardware were essentially inextricable, toward a more distributed, digital model. Today we're on the cusp of the Vehicle 4.0 era, the completely software-defined vehicle, set to offer opportunities for automakers and OEMs alike.

“It’s similar to the move from a flip phone to a smartphone,” says Russ Ruben, director of automotive and emerging segment marketing at Western Digital. “We’re going from a vehicle that comes as-is, to one that acts as a hardware platform for applications that can be updated independently, over the air. It can be the car you buy today and could keep its value for close to a decade, because it can be updated continuously with the latest and greatest software and firmware.”

The evolution of automotive technology, from hardware-first Vehicle 1.0 through to tomorrow’s software-defined Vehicle 4.0.

While we're still perhaps a decade from fully autonomous vehicles, Vehicle 4.0 is set to shake up the automotive industry with innovative architectural changes, dynamic data processing across vehicle, edge, and cloud, continuous software and firmware updates, modularized hardware for serviceability, and more. Here's a look at what's to come.

From hardware to software

Back in the day, software and hardware were tightly coupled in a vehicle. Features were developed and implemented in conjunction with the underlying hardware, and software was essentially a single monolithic system, with dependencies between components that made updating individual software functions difficult, if not impossible. Hardware drivers and system services were unique to each system, leading to hardware complexity and fragmentation, difficult maintenance, and costly upgrades.

A software-defined car is a whole new ballgame. Hardware and software are no longer tightly coupled; applications interact with hardware services, or message other applications, through hardware-agnostic interfaces such as APIs and middleware. That means OEMs can develop independent applications to address specific functions or services, and it becomes far easier to update or add software because these applications are self-contained.

Consider, for example, advanced driver-assistance systems (ADAS), which require significant hardware, mainly in the form of sensors. How that sensor data is processed and acted upon, however, needs to be continuously improved, which can be done with regular software and algorithm updates, making the vehicle safer over the length of its life without changing the hardware.

With an abstraction layer providing services that map hardware-specific functions and data to hardware-agnostic ones, the hardware fragmentation that comes with software incompatibility is reduced, or even eliminated.
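This kind of abstraction layer can be sketched in code as a small interface that applications program against, with hardware-specific drivers hidden behind it. The sketch below is purely illustrative; the class and method names are assumptions, not a real automotive API.

```python
from abc import ABC, abstractmethod

# Hypothetical hardware-agnostic service interface: an application calls
# read_distance_m() without knowing which sensor hardware is installed.
class RangeSensorService(ABC):
    @abstractmethod
    def read_distance_m(self) -> float: ...

# Two hardware-specific drivers mapped behind the same interface.
class RadarDriver(RangeSensorService):
    def read_distance_m(self) -> float:
        return 42.0  # stand-in for a real radar register read

class LidarDriver(RangeSensorService):
    def read_distance_m(self) -> float:
        return 41.7  # stand-in for a real lidar frame parse

# The application depends only on the abstraction, so swapping the
# underlying sensor hardware requires no change to application code.
def following_distance_warning(sensor: RangeSensorService,
                               threshold_m: float) -> bool:
    return sensor.read_distance_m() < threshold_m
```

Because the warning logic sees only the interface, an OEM could update the application, or the underlying driver, independently via an over-the-air update.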

This means we'll see a physical restructuring, as hardware and software are assigned to zones, or specific domains throughout the car, arrayed around a central computer.

Each zone acts as a gateway to distribute data and electric power. The physical location of each zone controller determines what input/output (I/O) it handles (a zone at the front may handle front-facing sensors, for instance), freeing the software in the domain controller to focus on higher-level compute functions. The Ethernet standard will become the electronic/electrical backbone of the system, scaling as the bandwidth needed for in-car networking grows while reducing connectivity cost and cabling weight.
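The zone-to-central-computer relationship can be sketched as a simple routing layer: each zone controller owns only the I/O physically wired to it and forwards data inward. All names and the zone layout below are illustrative assumptions, not a real vehicle network stack.

```python
# Hypothetical zonal layout: each zone controller handles the I/O
# physically near it and forwards data to the central computer.
ZONE_IO = {
    "front": ["front_camera", "front_radar"],
    "rear": ["rear_camera", "parking_sensors"],
}

class CentralComputer:
    """Collects data forwarded by the zone controllers."""
    def __init__(self) -> None:
        self.received: list[tuple[str, str, bytes]] = []

    def ingest(self, zone: str, device: str, payload: bytes) -> None:
        self.received.append((zone, device, payload))

class ZoneController:
    """Gateway for one physical zone of the vehicle."""
    def __init__(self, zone: str, central: CentralComputer) -> None:
        self.zone = zone
        self.devices = ZONE_IO[zone]
        self.central = central

    def forward(self, device: str, payload: bytes) -> None:
        # Only I/O physically attached to this zone is handled here.
        if device not in self.devices:
            raise ValueError(f"{device} is not wired to the {self.zone} zone")
        self.central.ingest(self.zone, device, payload)
```

In a real vehicle the forwarding would run over the in-car Ethernet backbone; here it is reduced to a method call to show the division of responsibility.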

Third parties will be able to create new and innovative entertainment apps that run on the infotainment systems, while OEMs will be able to continuously upgrade safety, connectivity, and efficiency applications such as high-def 3D maps and ADAS with over-the-air (OTA) updates, as well as enable vehicle-to-everything (V2X) communication, all of which require on-board data storage.

Understanding the impact on data storage

Over the next decade, as we ramp up to Vehicle 4.0 and eventually fully autonomous driving, cars will essentially become rolling data servers. As OEMs add connectivity and advanced driver-assistance systems, and AI becomes ubiquitous, vehicle functions will require more and more compute, and some estimates suggest a single vehicle could require up to 11TB of storage.

To fully realize the promise of the software-defined vehicle, those systems all need to operate in real time, on board and independent of the cloud, with higher-performance, higher-capacity storage that allows the data to move quickly, Ruben says.

That will require a whole new storage architecture. Rather than stand-alone storage for each system, multiple applications will access a central storage device, which will need to be high-performing, with increased capacity, reduced latency, encryption, and features like SR-IOV for sharing data storage across multiple processors. And as the vehicle gets closer and closer to driving itself, high quality and high reliability will become even more critical.

“For the next 10 years, there will be continued advances in the architecture of the vehicle, which will then drive the changes needed in the interfaces with the storage devices,” Ruben says. “The driverless car might be a decade away, but storage needs to keep pace with the products, performance and capacity points required as Vehicle 4.0 evolves.”

Developing storage for the future

“Software-defined vehicles pose a challenge for the car manufacturers and the tier-one suppliers that are currently developing new systems,” Ruben says. “They require a long-term data strategy — and storage can’t be an afterthought. You are susceptible to having issues when you do that.”

The question is how data storage requirements will change over the coming decade. Manufacturers might know how much storage they need today, but in the future, as more software and more features are added, how large will storage need to become, and what will the workloads look like? The longevity of a data storage device depends on how it's used: whether the workload is write-intensive or read-intensive.
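The link between workload and longevity can be made concrete with a back-of-the-envelope endurance estimate: flash devices are rated for a total volume of writes (terabytes written, or TBW), so daily write volume determines useful life. The figures below are illustrative assumptions, not Western Digital specifications.

```python
def estimated_lifetime_years(tbw: float, gb_written_per_day: float) -> float:
    """Years until a device's rated terabytes-written (TBW) is exhausted,
    assuming a constant daily write volume."""
    tb_written_per_year = gb_written_per_day * 365 / 1000
    return tbw / tb_written_per_year

# Example: a device rated for 600 TBW, in a vehicle logging
# 100GB of sensor and map data per day.
years = estimated_lifetime_years(tbw=600, gb_written_per_day=100)
```

Run the same arithmetic with a write-heavy ADAS logging workload of 500GB per day and the estimated life drops fivefold, which is why understanding the workload up front matters.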

Pinning down future workloads and capacity needs is challenging, particularly now, with innovation across the auto industry still surging forward. But there are options to ensure that automakers are ready to meet upcoming demands, Ruben says: a move to higher capacity points or a modular architecture.

Ruben suggests going at least one or two capacity points higher than what's currently necessary. Alternatively, a modular architecture for serviceability pairs a data storage daughter card with a health monitor. If the card begins to wear out, an owner can simply have it swapped out at the auto shop; if capacity is no longer sufficient, the owner can upgrade to a higher-capacity device.
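A health monitor for such a daughter card could boil down to a simple check against wear and capacity thresholds. The thresholds and field names below are illustrative assumptions, not a real drive-health API.

```python
def service_recommendation(percent_life_used: float,
                           percent_capacity_used: float) -> str:
    """Recommend an action for a modular storage daughter card based on
    hypothetical wear and capacity readings (0-100)."""
    if percent_life_used >= 90:
        # Approaching the rated write endurance: replace the card.
        return "swap card: approaching rated write endurance"
    if percent_capacity_used >= 90:
        # Healthy but full: move to a higher-capacity card.
        return "upgrade card: capacity nearly exhausted"
    return "ok"
```

In practice the inputs would come from the device's built-in health reporting; the point of the sketch is that swap-for-wear and upgrade-for-capacity are distinct service paths.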

To get it right, hardware and software developers have to talk to each other rather than operate in silos — hardware has to be designed according to how the software will function. Of course, data storage might not be priority number one in an architect’s mind — typically the chipset is considered first, but data storage should be next on that list of priorities.

“Our message is always to make sure you understand your workloads,” Ruben says. “You need to understand that up front, because it will impact your system architecture, the capacity points you’re going to need, and you’ll need to change your strategy accordingly. We’re evolving with the chipset vendors to stay in sync, and ensure we are supporting the protocols and interfaces necessary for vehicles now, and the vehicles on the horizon.”

To learn more about the evolution toward a more adaptable, software driven vehicle and the opportunities and challenges ahead, visit

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact