Presented by Optum

Artificial intelligence (AI), used responsibly, has the opportunity to live up to its promise of helping achieve what we in health care call the Quadruple Aim: better health outcomes, better patient experiences, and better care provider experiences, all at a lower total cost. However, without the proper safeguards, the use of AI can lead to unintended — and sometimes harmful — consequences.

AI is susceptible to this inadvertent harm for a simple reason: it is trained on data that reflect biases that occur in the real world. If a model that’s used to inform care decisions doesn’t account for racial or geographic health disparities, predictive AI may unintentionally perpetuate biases that affect an individual’s health or access to care. An algorithm can also inadvertently produce inequities if it’s not used as intended or if it’s applied inappropriately. To overcome these challenges, the health care industry needs to acknowledge their existence and take proactive steps to minimize them.

Fortunately, health care leaders do not view AI as a substitute for the human touch in the delivery and administration of health care. Instead, they see it as a useful tool for highly trained experts that helps them do their jobs more efficiently and effectively. At Optum, teams of data scientists work every day to help ensure that AI infused into our health system is applied responsibly, ethically, and equitably.

What’s all the hype about?

When it’s trained and deployed appropriately, AI’s advantages in clinical care are clear. Its ability to quickly analyze far greater quantities of data than is humanly possible can help in a number of ways, from simplifying appointment scheduling to identifying anomalies in medical imaging studies to powering digital triage tools. But AI-enabled capabilities can also alleviate many of the less obvious administrative headaches associated with our health system.

For example, AI can review medical documentation and help determine a hospital visit’s appropriate reimbursement status, improving efficiency. It can root out potential fraud, waste, and abuse in medical or pharmacy claims, reducing unnecessary spend. And it can help us narrow down potential new drug candidates, creating quicker access to new therapies while avoiding the costs of unsuccessful trials. The list goes on.

Its potential is seemingly endless, which reflects both the burgeoning use cases of the technology itself and the fact that we have so many systems within health care that need to be fixed or improved. The good news is that we’ve moved beyond hype — these advantages are being realized more and more each day.

For three years now, as a part of the annual Optum Survey on AI in Health Care, executives from hospitals and health systems, health plans, employers, and life sciences organizations have shared with us their attitudes and expectations related to AI in health care. This year’s big takeaway was a resounding, growing confidence in AI’s potential.

More than half — 59% — of survey respondents said they expect a return on their AI investments in under three years, nearly double the 31% who answered similarly in 2018. And this confidence is influencing their hiring decisions — 95% want to hire AI talent and 92% expect their workforce to understand how AI makes its predictions.

So, what does this all mean? Leaders from all sectors of health care are signaling that infusing AI into their businesses is a critical step toward achieving their organizations’ strategic goals. They have confidence that these investments are worthwhile, both from a financial and patient care perspective.

Working toward more equitable health outcomes

While optimistic about AI’s benefits, health care leaders also expressed concerns about its potential to perpetuate inequities.

Three out of four executives said they were wary about bias creeping into AI’s results, whether because of the algorithms embedded in the technology or because of how the algorithm is used. This concern was especially prevalent within organizations that have not yet implemented AI (79%), but also occurred among those currently utilizing AI (66%).

A perceived lack of transparency is also a worry. Seventy-three percent of respondents were concerned about the “black box” nature of AI results — meaning it is not always clear which combination of parameters drives a model’s recommendations or accounts for its efficacy.

Both of these concerns stem from how predictive algorithms work. As we mentioned earlier, historical patient data reflect historical inequities that, left unchecked, may disadvantage some populations. And the uncertainty clouding the explainability of the model’s output is a direct result of how machine learning algorithms ingest data and form their own connections to create inferences. In a field that has long prized evidence-based decision-making, that can be a tough hurdle to overcome.

To help ensure AI doesn’t perpetuate inequities, health care leaders are doing two things. First, their teams are using social determinants of health (SDOH) data to provide insight into where and how people live, work, learn, and play. Leaders are hoping it will help them identify the complex factors that affect health outcomes.

Second, whenever possible, they’re building explainable interfaces (e.g., conversational user interfaces) into their platforms to better understand what’s influencing the output. Greater transparency can help human experts ensure that the model does not inadvertently favor one group or geography over another.
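
One widely used, model-agnostic way to peek inside such a black box is permutation importance: shuffle one input feature across patients and measure how much the model’s accuracy drops. A large drop means the model leans heavily on that feature. The sketch below uses a hypothetical risk model and made-up features (age, prior visits, distance to clinic); it illustrates the technique, not any specific Optum system.

```python
import random

def model(features):
    """Stand-in for a black-box risk model (hypothetical weights)."""
    age, prior_visits, distance_to_clinic = features
    score = 0.04 * age + 0.3 * prior_visits - 0.01 * distance_to_clinic
    return 1 if score > 2.0 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    shuffled = [row[feature_idx] for row in rows]
    rng.shuffle(shuffled)
    perturbed = [
        tuple(s if i == feature_idx else v for i, v in enumerate(row))
        for row, s in zip(rows, shuffled)
    ]
    return accuracy(rows, labels) - accuracy(perturbed, labels)

# Toy data: (age, prior visits, miles to clinic); labels taken from the model
rows = [(70, 5, 10), (30, 1, 50), (60, 8, 5), (25, 0, 80)]
labels = [model(r) for r in rows]
for idx, name in enumerate(["age", "prior_visits", "distance"]):
    print(name, permutation_importance(rows, labels, idx))
```

Running the same check separately within each demographic group shows whether the model relies on different signals for different populations — one concrete way a human reviewer can spot an unintended geographic or group skew.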

By feeding more complete data into their AI algorithms and adding explainability, health care executives hope to avoid and combat bias. At health care organizations that either utilize AI or plan to, almost every leader surveyed (96%) perceives AI as an important tool to help achieve health equity.

Responsible use means more than just ethical data science

To unlock the advantages of artificial intelligence in an equitable and sustainable way, complete data and transparency are only part of the solution. The responsible use of advanced analytics requires awareness of the strengths and limitations of the data, the AI methodology, and the application of AI results. No data set is perfect, especially when it comes to minority populations, who are historically underrepresented in widely available data types. Data reflect the biases that occur in the real world, but we can use technology to help overcome them. Tools such as Aequitas can evaluate fairness in machine learning models; we use this open-source tool to help assess models for bias and identify their impact on vulnerable groups.

Organizations need to be vigilant in recognizing the limitations of incomplete data, and therefore the limitations of AI. They also need to train decision-makers to be sensitive to these issues, especially in an industry whose purpose is to deliver care and support health for real people. Better understanding inequities and their connections to health is a first step toward addressing them.
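
The kind of group-fairness audit that tools like Aequitas automate can be illustrated with a small, self-contained sketch. Here we compare false positive rates — how often each group is wrongly flagged — and the disparity ratio between groups; the records and the reference group are hypothetical.

```python
from collections import defaultdict

def false_positive_rates(records):
    """False positive rate of a model's flags, per group.

    Each record is (group, true label, prediction), with 1 = positive.
    FPR = wrongly flagged negatives / all true negatives in the group.
    """
    negatives = defaultdict(int)
    false_pos = defaultdict(int)
    for group, label, pred in records:
        if label == 0:
            negatives[group] += 1
            if pred == 1:
                false_pos[group] += 1
    return {g: false_pos[g] / negatives[g] for g in negatives}

def disparity_ratios(fpr, reference):
    """Each group's FPR relative to a chosen reference group.

    A common rule of thumb treats ratios far from 1.0 as a red flag.
    """
    return {g: rate / fpr[reference] for g, rate in fpr.items()}

# Toy audit: (group, true label, model prediction)
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
fpr = false_positive_rates(records)  # A: 1/3, B: 2/3
print(disparity_ratios(fpr, reference="A"))  # group B is flagged at twice A's rate
```

In practice an audit would cover many metrics (false negatives, predicted prevalence, and so on) across every protected attribute, which is precisely the crosstab that fairness toolkits generate automatically.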

At Optum, we are conducting research to better understand the sources of systemic bias in health care delivery that impact outcomes — for example, race corrections in clinical guidelines. This knowledge helps us be better-informed consumers of AI-derived results and more aware of the sources and risks of perpetuating bias.

As our health care becomes more digitized and data-based, more equitable outcomes will also be dependent on the geographic equity of digital connectivity. Expanding high-speed internet to underserved rural areas will enable easier connections. Easing state licensure requirements will also help, so that a patient in rural Alabama can use video to connect with his clinician, even if she is practicing medicine in New York City.

A rising tide lifts all boats

Just as AI-powered solutions can lead to more efficient, user-friendly experiences and better health outcomes, they can also produce a more equitable system. Today, experts in public health are using machine learning systems to help remove barriers to care in underserved communities.

AI care coordination platforms are alerting care management teams within health plans and health systems about patients and populations that are in need, regardless of the zip code in which they live.

Digital health apps on smartphones — which have become ubiquitous in low-income communities — are connecting people with programs and services they qualify for, and offering nudges that help them improve their behaviors and their health.

These solutions that use AI to create a path to better health equity are just a few examples of how AI is offering increased insights for health care leaders. As AI becomes more transparent, as data becomes more inclusive, and as decision-makers are better trained to address limitations in technology, the pursuit of health care’s Quadruple Aim will continue to accelerate. That means better performance for organizations across the health care sector — and better health outcomes for all of us.

Dig deeper: Read more about health care executives’ attitudes toward artificial intelligence, and its growing impact on health care, at

Margaret (Meg) Good, PhD, Vice President, Optum Enterprise Analytics

Dr. Margaret (Meg) Good specializes in health economics, health policy, and survey research methods. In her role, Dr. Good advises Optum businesses on how to use analytics and artificial intelligence to achieve strategic objectives for their products and services. Prior to joining Optum, she was a faculty member in the Department of Public Policy at the University of Maryland, Baltimore County where she taught courses in health policy and research methods. She also worked at the University of Minnesota in a research collaborative that helped states expand access to health insurance and health coverage among disadvantaged populations. Dr. Good earned her PhD and MS in health services research and policy at the University of Minnesota and her undergraduate degree at Williams College.

Kerrie Holley, Senior Vice President and Technology Fellow, Optum

Kerrie Holley joined Optum as its first technology fellow, focused on advancing the enterprise’s capabilities in AI, machine learning, deep learning, graph technologies, the Internet of Things, blockchain, virtual assistants, and genomics. Prior to Optum, Holley was the VP and CTO of analytics and automation at Cisco. He spent the bulk of his career at IBM, where he was a fellow and master inventor focused on scalable services and cognitive computing. Holley was IBM’s first African American distinguished engineer and a member of the Academy of Technology, which comprises the company’s top 300 technologists. He holds a JD and a bachelor’s degree in mathematics from DePaul University.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact