
Sponsored by Intel

Maximizing security and quality of service (QoS) while minimizing total cost of ownership (TCO) has never been more challenging — or more important. That’s especially true for governments and businesses managing highly sensitive, high-value workloads and data. New deep-stack threats and infrastructure modernization demand new approaches for peak performance, from the data center to the edge.

Government, finance, energy, healthcare, and other security-sensitive industries must now defend against a wider range of outside and insider threats, notes Bill Giard, CTO, Digital Transformation & Scale Solutions, Data Center Group at Intel. Targets include every level of the computing stack, including firmware, BIOS, and virtual machines (VMs).

At the same time, organizations need simpler, more cost-effective ways of sharing compute resources that don’t degrade QoS. But how?



High-stakes reassessment underway

It’s no longer enough to keep malicious actors out of the data center or network perimeter. Unfortunately, Giard says, “software-only options aren’t adequate.” Nor are standard perimeter controls, like firewalls. Standard isolation of east-west traffic — data moving between servers within a data center — won’t stop rootkits that can hide from conventional protections. And putting every high-security workload on its own bare-metal machine is a costly, inefficient stop-gap measure. (More on that in a bit.)

Making the right choice for large-scale cloud security is also a crucial business decision. The Council of Economic Advisers says malicious cyber activity could cost the U.S. as much as $109 billion per year. IBM estimates that the global average cost per breach is $3.8 million. With so much at stake, no organization can afford to operate costly, insecure infrastructure.

As a result, many technology and business leaders are reassessing legacy methods of securing confidential data. They’re concerned that conventional defenses are ineffective, onerous, or lack scalability. Indeed, the appealing economics of hyper-converged infrastructure demand that organizations figure out a viable solution. But again, the question is exactly how?

Solutions and key principles

Some private and public sector organizations are implementing new leading-edge technology developed by Intel and Lockheed Martin, designed specifically for sensitive-data workloads that require high levels of protection and QoS. The long-time partners have collaborated on a hardened, full-stack virtualization platform for edge and data center systems. In production for several years, the solution is now broadly available through OEMs as part of Intel Select Solutions for Hardened Security.

Yet many enterprises and government entities still struggle to understand the key elements and steps needed to cost-effectively protect high-value, run-time data in a virtualized environment.

Here are some important principles that can help your organization keep ahead of the rapidly changing security landscape.

Key 1: Think holistically and whole stack

Bad actors are now attacking the whole stack, so it follows that organizations need to better harden the whole stack. Piecemeal defense risks creating gaps in crucial cyber-armor, says Adam Miller, director, New Initiatives, Lockheed Martin Missiles and Fire Control.

“From crypto-jacking to malicious insiders, IT can’t simply ‘bolt on’ security features,” he says. “To improve security in the data center, organizations can’t just deploy random products. They need to start at the processor, the foundation, then take a holistic view of the organization’s risks and establish controls.”

The reason is simple: Servers can run the most secure operating system available, but if the layers below are not validated and trusted, attacks can still succeed. So modern defenses must provide protection across the entire computing stack, from hardware to software, including hypervisors, operating systems, applications, and data. The most effective systems protect every phase — boot, BIOS load, and runtime — in a VM environment. An integrated approach minimizes the time, cost, and complexity of evaluating and integrating hardware and software.

Key 2: Start with hardware foundations

Advanced persistent threats (APTs) use rootkits and other means to compromise low-level components in the enterprise stack, including hypervisors, boot drivers, BIOS, firmware, and even hardware. For example, the “Shamoon” exploit (aka W32.DisTrack) attacked PC master boot records. Since then, malware has only grown more sophisticated.

Security research firm Eclypsium, for instance, reports that UEFI rootkits such as LoJax enable “firmware to communicate remotely and even perform a full HTTP boot from a remote server across the internet.” The resulting implanted malware not only jeopardizes valuable IP, but threatens to undermine InfoSec credibility. Of the dangerous new classes of hardware attacks, Gartner cautions: “The underlying exploitable implementation will remain for years to come.”

Given the seriousness of the threat, it’s crucial to create a secure foundation. Servers can run the most secure OS available, but the firmware layers below must be validated and trusted, or attacks can still succeed. Boot protection can be implemented in various ways, but to be truly trusted it must be rooted in hardware, which in turn enables the additional software-based defenses that run higher up the stack.

Establishing hardware-enforced firewalling increases the protection of sensitive data from untrusted workloads or malware threats — helping to eliminate leakage, modification, and privilege escalation. That is why Intel-Lockheed Martin started with cryptographically isolating VMs. “It’s crucial to build foundational security that other security can rest upon,” explains Miller.

Key 3: Look beyond isolated bare metal

Organizations typically create standalone “bare-metal” systems for high-security applications. The practice of running workloads directly on dedicated physical hardware has gained traction as a way to get high performance for sensitive-data workloads; the global bare-metal market continues to grow by 14% a year.

Proponents say bare metal’s physical machine-level isolation provides reliable, stable, economical, and exclusive computing resources. Yet the approach also has detractors. Critics say that bare-metal servers require more physical space, consume more power, and spike maintenance and support costs. Some security experts say that while bare metal can effectively reduce attack surfaces, it’s a limited solution.

Intel’s Giard explains why: “If you have your top-secret or high-security application on bare metal, you’re having to build a whole new rack of system from the ground up and isolate the ports from network access, because you need to control the software running alongside it. Unfortunately, that also means you’re largely barred from the time-to-market agility you get with modern cloud-based, shared, Software Defined infrastructure and orchestration.”

True, bare metal might help you reach QoS goals by quieting “noisy neighbor” problems that impact performance. But from a TCO standpoint, it’s a bust: Each system requires a new core, new VM license, rack space, power, and other related ownership expenses, which can quickly spiral.

In contrast, modern security infrastructure consolidates multiple, complex, and dedicated legacy servers into a simplified and partitioned solution. Doing so eliminates the need to create infrastructure for each system, Giard explains. “Now, instead of having three or four systems that sit alongside themselves, you put those applications on the same system, then provision them through software, just like you would do in OpenStack or any other virtual machine environment,” he says.
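As a sketch of what provisioning through software can look like in practice, the OpenStack commands below pin a sensitive workload to dedicated cores on shared hardware. The flavor and image names are hypothetical; the `hw:cpu_policy` and `hw:mem_page_size` extra specs are standard Nova options for CPU pinning and huge pages.

```shell
# Define a flavor for high-security workloads (names are illustrative)
openstack flavor create secure.large --vcpus 8 --ram 32768 --disk 100

# Pin vCPUs to dedicated host cores and back memory with 1 GB huge pages,
# reducing "noisy neighbor" contention on the shared host
openstack flavor set secure.large \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB

# Launch the sensitive workload as an ordinary VM on consolidated hardware
openstack server create --flavor secure.large --image hardened-os my-workload
```

The point is that isolation becomes a software policy attached to the workload, rather than a separate rack built from the ground up.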

A quick litmus test

Consolidating multiple bare-metal systems into a virtualized environment can still satisfy QoS KPIs. This server consolidation saves time and reduces IT software licensing and support costs.

Giard suggests two quick litmus test questions:

  • Can you partition and isolate shared resources such as cache, cores, memory, and devices?
  • Can you provide cross-domain protection from leakage, modification, and privilege escalation?
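On Linux, one concrete way to answer the first question is Intel Resource Director Technology (RDT), exposed through the kernel’s resctrl filesystem, which lets you partition last-level cache and assign cores to specific workloads. A minimal sketch, where the group name, bit mask, and PID are illustrative and the mask width depends on the processor:

```shell
# Mount the resctrl interface (requires a CPU with Intel RDT / CAT support)
mount -t resctrl resctrl /sys/fs/resctrl

# Create a resource group for the sensitive workload
mkdir /sys/fs/resctrl/secure_workload

# Reserve a slice of L3 cache ways on socket 0 for this group
# (the bit mask is hardware-specific; 0x00f = the lowest 4 cache ways)
echo "L3:0=00f" > /sys/fs/resctrl/secure_workload/schemata

# Bind the workload's process to the partitioned resources
echo "$WORKLOAD_PID" > /sys/fs/resctrl/secure_workload/tasks
```

Processes outside the group can then be restricted to the remaining cache ways, limiting both performance interference and cache-based side channels between domains.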

Achieving balance

As high-security computing continues to scale out in the cloud and edge, Giard predicts that full-stack security and modern virtualization infrastructure will become an industry norm.

The approach, he says, should appeal to public and private sector entities, as well as industry OEMs and ISVs. “They can turn these services on in different ways without disrupting their pipeline and offer new structured security strategies as part of their security services. Engines can be turned out straight from the factory.” Hewlett Packard Enterprise, Mercury Systems, and Supermicro are readying offerings based on the Intel-Lockheed Martin solution.

Organizations don’t have to choose between defending devices, networks, and data centers and the consistent performance and economic benefits of modern infrastructure. Sharing resources does not have to mean sharing risk or your organization’s most sensitive and valuable assets. Bringing the security and performance of bare-metal systems to cloud and virtual infrastructure is a game changer.

Go deeper: Intel Select Solutions for Hardened Security with Lockheed Martin

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way.
