On May 26, Google and HCA Healthcare, a national hospital chain, announced a data sharing partnership that will provide the internet giant with access to a host of patient records and real-time medical information. But what is being cast by both companies as a win for improved patient treatments and outcomes is hardly a victory for consumers.
Google has a dark history of exploiting personal data for profit. Going back at least to Project Nightingale, Google has collected and monetized sensitive patient data from millions of Americans. The HCA agreement will put an enormous quantity of new patient data into Google’s hands, and some have already pointed out how similar the HCA agreement looks to the data sharing arrangement that powered Project Nightingale.
But while the new HCA deal poses a major threat to the privacy of consumers’ health data, Washington D.C.’s attention has been elsewhere when it comes to data security. Since the deal was first announced, America has had to face a rising tide of ransomware crime. Data privacy laws have taken a back seat to the fight against ransomware attacks, and American consumers are being left to fend for themselves.
Our national discourse simply isn’t taking this new threat to health data privacy seriously enough. It is as if many do not perceive that threat at all. But these data sharing agreements aren’t innocent. And we need to raise the level of awareness of the real risks to act as a catalyst for greater regulatory scrutiny of such efforts.
Data sharing agreements between large corporations offer opportunities to better understand trends in patient outcomes and subsequently improve decision making for patient care. As HCA’s chief medical officer stated, the new agreement is designed to create a “central nervous system to help interpret the various signals” of patient data. This might seem like enough of a benefit to override any other concerns, but that is because those other concerns have gone largely unexamined.
Consider for a moment just what kinds of risks to patient data already exist, and the concerns consumers have about who holds their data and how it is used. In 2019 alone, 41.2 million healthcare records were exposed, stolen, or illegally disclosed in 505 healthcare data breaches, putting millions of individuals, as well as businesses, at risk of having protected health information misused. It should be clear, then, that concentrating large numbers of medical records within a single entity raises the stakes of any illicit access, because a single breach can expose a vast number of individuals at once.
Privacy concerns are not just related to the fact that stolen data could potentially harm patients and consumers, however. They are also tied to the simple reality that individuals feel as though they have no say in how their personal data is acquired, stored, and used by entities with which they have not meaningfully consented to share their information.
According to the Pew Research Center, more than half of Americans have no clear understanding of how their data is used once it has been collected, and some 80% are concerned about how much of their data advertisers and social media companies have collected. Generally speaking, consumers do not have a firm grasp of how their information is used, which inhibits their ability to make informed decisions about who can access their data and how it can be used.
Similar research finds that consumers feel powerless in the age of big data: Three-quarters of Americans say they have little control over the personal information collected on them, and almost nine in 10 are very concerned about their privacy when using free online tools like Facebook and Google. In other words, consumers do not believe they have much of a say when it comes to their own data privacy—and they are right.
The legitimate concerns of consumers combined with a massive and growing amount of data theft make agreements like the one between Google and HCA unwise, despite potential benefits. While the data that Google will have access to will be anonymized and secured through Google’s Cloud infrastructure, it will be stored without the consent of patients, whose deeply personal information is in question. This is because privacy laws in the United States allow hospitals to share patient information with contractors and researchers even when patients have not consented. Even when information is anonymized, taking away patients’ control over access to their own information in such a way is a deeply troubling act, no matter the potential health benefits.
Privacy concerns are often overlooked because patients and consumers do not feel appropriately equipped to safeguard their own information. And when companies can share private information without their even knowing it, how could they? It is high time for companies to prioritize the privacy of patients, and to recognize the growing threat to autonomy represented by the aggregation and sharing of large swaths of data.
While hospitals may be able to improve care with a raft of new information, leaders in these fields, and the general public, need to start asking tougher questions about how data is acquired and used, and shine a brighter light on the wisdom of sharing such information with a company that is in the business of monetizing consumer data. A more even balance needs to be struck between innovation and privacy — and the agreement between Google and HCA will only make this aim harder to achieve.
Tom Kelly is president and CEO of IDX, a Portland, Oregon-based provider of data breach and consumer privacy services such as IDX Privacy. He is a Silicon Valley serial entrepreneur and an expert in cybersecurity technologies.