Insurance company Columbia Casualty, a division of industry giant CNA, recently sued a former client, Cottage Health System, to recover a paid claim of more than $4 million. The suit claimed that Cottage had failed to keep its security controls up to date and, when a breach occurred, tried to put the insurer on the hook for the damages.

The insurer’s response? Not this time.

For years, software vendors have operated under the assumption that they can shift security risk to someone else. We’ve seen companies ship products or deliver services with known vulnerabilities, expecting the customer to absorb the risk. The vendor, in turn, is protected from financial loss by its insurer. By shifting risk, a software company can delay fixing vulnerabilities in its code and maintain its release schedules.

But the Columbia lawsuit could herald an end to this practice.

Columbia’s suit does not contend that Cottage was required to employ security experts or take any extraordinary steps. Instead, the issued policy required Cottage to maintain what are known as “Minimum Required Practices,” including replacing factory default settings in its IT environment, checking for vendor-supplied security patches, and implementing those patches within 30 days. The latter requirement exists to prevent attackers from exploiting known vulnerabilities.

These are not burdensome requirements, and they don’t demand complicated testing procedures. The expectation is straightforward: if a vendor discloses a vulnerability and a patch that fixes it is available, the organization must install that patch to maintain insurance coverage.

The bottom line is that insurers now understand that not all breaches are inevitable, and that companies must do more to protect their clients. Shifting risk to customers and insurers will no longer be enough to protect organizations from financial liabilities.

Avoiding Vulnerabilities

Software and service providers who want to avoid these liabilities will need to get familiar with Minimum Required Practices. And they’ll need to be especially vigilant when it comes to open source code.

Each year, organizations use increasing amounts of open source code in software development. Older versions of open source components may contain security vulnerabilities that have already been disclosed in NIST’s National Vulnerability Database (NVD). Black Duck’s Future of Open Source survey shows that most organizations do a poor job of monitoring and tracking the open source they use. Avoiding these vulnerabilities is as simple as matching a list of open source components (a list that must be maintained anyway to comply with open source licenses) against the NVD. This simple activity, coupled with replacing each vulnerable component with a patched version, immediately improves the security profile of the application at little cost.
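The matching step described above can be sketched in a few lines of Python. The component names, versions, and CVE identifiers below are illustrative placeholders, not real NVD data; in practice the vulnerability map would be built from NVD records rather than hardcoded.

```python
# Sketch: match an application's open source inventory against a list of
# known-vulnerable component versions. All names and CVE IDs here are
# illustrative placeholders, not real NVD entries.

# Known-vulnerable versions, keyed by (component, version). In practice this
# would be populated from NIST's National Vulnerability Database.
KNOWN_VULNERABILITIES = {
    ("examplelib", "1.2.0"): ["CVE-0000-0001"],
    ("otherlib", "2.0.1"): ["CVE-0000-0002", "CVE-0000-0003"],
}

def find_vulnerable_components(inventory):
    """Return the subset of (name, version) pairs with disclosed CVEs."""
    findings = {}
    for name, version in inventory:
        cves = KNOWN_VULNERABILITIES.get((name, version))
        if cves:
            findings[(name, version)] = cves
    return findings

# An application's open source inventory: (component, version) pairs.
inventory = [("examplelib", "1.2.0"), ("safelib", "3.1.4")]
print(find_vulnerable_components(inventory))
```

The hard part in practice is not the matching itself but building an accurate inventory; components pulled in as transitive dependencies are easy to miss.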

Security testing typically ends when an application is released. Since the code is no longer changing (the reasoning goes), additional testing will not expose additional issues. This neglects the issue of a changing threat environment, however. When new vulnerabilities are disclosed in the open source you’re using, you need to acknowledge that your previously “secure” code is now vulnerable to a publicly disclosed attack vector. Keep track of the components you use, and check the NVD regularly for changes.
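The recurring check described above amounts to comparing today’s scan results against yesterday’s and alerting on anything new. A minimal sketch, again with placeholder component names and CVE IDs:

```python
# Sketch: detect newly disclosed vulnerabilities affecting components that
# were previously clean. All names and CVE IDs are illustrative placeholders.

def newly_disclosed(previous_findings, current_findings):
    """Return CVEs present in the current scan but absent from the previous one."""
    new = {}
    for component, cves in current_findings.items():
        added = set(cves) - set(previous_findings.get(component, []))
        if added:
            new[component] = sorted(added)
    return new

yesterday = {("examplelib", "1.2.0"): ["CVE-0000-0001"]}
today = {
    ("examplelib", "1.2.0"): ["CVE-0000-0001"],
    ("safelib", "3.1.4"): ["CVE-0000-0009"],  # disclosed since the last check
}
print(newly_disclosed(yesterday, today))
```

Run on a schedule against fresh NVD data, a check like this turns “our code was secure at release” into an ongoing claim you can actually verify.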

Today’s Insurance Issue is Tomorrow’s Sales Issue

Meanwhile, as insurers push back, we can expect customers who purchase software to do the same. Vendors should expect requests and requirements for Minimum Required Practices in the products and services they deliver. If you aren’t ready for these requests (and audits), your competitors who take security seriously will use them in every sales call they make.

The details of Minimum Required Practices will likely vary by customer (and by the criticality of the software or hardware a vendor provides). Depending on the definition of those practices, companies may be required to spend more on testing methodologies such as static and dynamic analysis.

A good first step is to understand what a Secure Development Lifecycle (SDLC) looks like and the various activities it can include. Publicly available guidance exists in the BSIMM, Microsoft’s SDL, and resources from OWASP.

Mike Pittenger is VP of Product Strategy at Black Duck Software.