Health care is complicated, but Intel thinks AI has the potential to revolutionize it.

A prime example is Montefiore, a health system located in the Bronx. Physicians there use a machine learning framework called PALM, running on Intel hardware, to derive insights from data spread across disparate sources. In a recent project, two physician researchers — Dr. Michelle Gong, director of critical care research, and Dr. Parsa Mirhaji, director for health innovations — developed a system that can detect early signs of acute respiratory failure, a condition that kills more than a third of the hospital patients who experience it.

VentureBeat spoke with Dr. Gong, Dr. Mirhaji, and Intel executives to get the inside scoop.

Powering Montefiore’s machine learning platform

Montefiore is one of the largest employers in New York State. It’s also one of the busiest health care complexes — hundreds of thousands of patients pass through its sprawling campuses, which include Montefiore Medical Center, the Albert Einstein College of Medicine, and Montefiore Medical Park.

Those logistical challenges catalyzed the development of Montefiore’s Patient-centered Analytical Learning Machine (PALM), a machine learning platform built from the ground up to predict and prevent life-threatening medical conditions and minimize wait times.

It runs on Xeon servers in Montefiore’s Yonkers datacenter, servers that Carey Kloss, head of AI hardware at Intel, said are tailor-made for the kinds of data structures PALM’s models handle on a daily basis. “CPUs can quickly access terabytes of memory,” Kloss told VentureBeat. “They’re a key component of a multi-architecture strategy.”

Currently, the team at Montefiore is experimenting with three hardware configurations: a server with dual octa-core Xeon processors, 768GB of system memory, 13TB of SSD storage, and 22TB of hard-disk space; an eight-server cluster with dual octa-core Xeon processors, 256GB of memory, and 12TB of traditional disk space each; and an experimental machine with 20-core Xeon Gold processors, 32GB of RAM, an 800GB SSD, and two 375GB Intel Optane SSDs.

PALM juggles lots of datasets — electronic medical records, insurance billing codes, drug databases, and clinical trial results, to name a few. And its analytical models recently expanded to handle voice, images, and sensor inputs from internet of things devices.

Collectively, these sources form what’s known as a “semantic data lake,” a phrase that requires a bit of unpacking. A data lake is a collection of data stores — that is to say, data assets and data sources. A semantic data lake organizes those storage instances in terms of a semantic graph model, making structured, semi-structured, and unstructured data easier to analyze and discover.

In concrete terms, PALM’s architecture allows researchers to draw on data from any physical server in the network regardless of its structure, including Montefiore’s ontological databases of ailments, medications, and genetic disorders.

Core to the semantic graph model are triplestores, a type of database optimized for storing and retrieving triples: subject-predicate-object statements — “John has tuberculosis,” for example — which PALM builds dynamically, as needed. Along the way, the system uses a frame data language, or FDL, to resolve ambiguities, such as when electronic records refer to a medication by its brand name rather than its scientific name (e.g., “Advil” or “Motrin” instead of ibuprofen).
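The triple pattern and the FDL normalization step can be sketched in a few lines of Python. This is a hypothetical illustration, not Montefiore’s actual schema: the drug table, predicate names, and patient data are all invented, and the brand-to-generic lookup merely stands in for what a real frame data language does.

```python
# Map brand names to scientific (generic) names, standing in for the FDL step.
BRAND_TO_GENERIC = {
    "Advil": "ibuprofen",
    "Motrin": "ibuprofen",
    "Tylenol": "acetaminophen",
}

def normalize_drug(name):
    """Resolve a brand name to its generic name; pass generics through."""
    return BRAND_TO_GENERIC.get(name, name)

class TripleStore:
    """Minimal in-memory subject-predicate-object store."""
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        if predicate == "takes_medication":
            obj = normalize_drug(obj)  # resolve ambiguity before storing
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching the given pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("John", "has_condition", "tuberculosis")
store.add("John", "takes_medication", "Advil")
store.add("Mary", "takes_medication", "Motrin")

# Both brand names resolve to the same generic, so one query finds both patients.
print(store.query(predicate="takes_medication", obj="ibuprofen"))
```

Because normalization happens at ingest time, a single query for the generic name retrieves records regardless of which brand name the original chart used.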

That’s only the tip of the iceberg for FDL. It can leverage broader relationships between symptoms, ailments, and prescriptions, allowing clinicians to search for patients with allergic reactions to specific medications or for people who have recently undergone a particular surgical procedure.

PALM’s final layer is the Analytic Tapestry, a supervisory system that tracks and networks all of Montefiore’s algorithmic models. It turns its observations into insights: if it sees that a new model is reducing hospital readmissions, for instance, it links that model with others doing the same.
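One way to picture that supervisory idea is a registry that records each model’s observed effect on an outcome metric and groups models that move the same metric in the same direction. This is purely a sketch of the concept; the class, metric names, and numbers below are invented, not part of the Analytic Tapestry itself.

```python
class ModelRegistry:
    """Toy registry linking models by their observed effect on a metric."""
    def __init__(self):
        self.effects = {}  # model name -> (metric, observed change)

    def report(self, model, metric, delta):
        self.effects[model] = (metric, delta)

    def linked_models(self, model):
        """Models whose observed effect matches the given model's direction."""
        metric, delta = self.effects[model]
        return [m for m, (met, d) in self.effects.items()
                if m != model and met == metric and (d < 0) == (delta < 0)]

registry = ModelRegistry()
registry.report("readmission_model_v2", "readmissions", -0.12)
registry.report("discharge_planner", "readmissions", -0.05)
registry.report("sepsis_alert", "mortality", -0.08)

print(registry.linked_models("readmission_model_v2"))  # ['discharge_planner']
```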

Arjun Bansal, vice president of Intel’s AI product group, said machine learning architectures such as PALM are particularly well-suited for health care analytics.

“They automate the monotonous work. Experts used to have to go in and tag the effects of treatments, for example,” he said. “Now there are end-to-end systems that handle those tasks in an automated fashion.”

Intel is helping to move the needle. Earlier this year, it worked with the Chinese government on a Xeon Scalable-powered deep learning solution that assists in the detection of two common kinds of preventable blindness, and it recently partnered with a large hospital chain to develop a model that projects cardiovascular disease risk from genomic variation data.

“We’re starting to see a lot of momentum and motivation from big health care providers,” Bansal said. “Our role is helping with the deployment phase and helping customers expand rapidly.”

Predicting respiratory failure

Acute respiratory failure (ARF) — the inability of the lungs to remove carbon dioxide from the blood, necessitating the use of a ventilator — is a ruthless killer. Roughly 35 percent of patients diagnosed with it don’t survive to hospital discharge; some studies put their risk of dying within the first six months at 50 percent.

The solution, Montefiore researchers concluded early on, is a system that warns clinicians about patients who are imminently likely to experience ARF. PALM was the perfect foundation.

“As a physician-scientist, I look at research and clinical trials to improve outcomes in patients with critical illness,” Dr. Gong said in an interview. “If we can identify these patients earlier, that’s invaluable.”

Dr. Gong worked with Mirhaji to train and validate a machine learning model that identifies patients at high risk for respiratory failure and death in hospitals. It’s called the Accurate Prediction of Prolonged Ventilation, or APPROVE for short.

Initial training data came from both Montefiore and the Mayo Clinic, and Dr. Gong said they made every effort to minimize bias. “We wanted to make the data generalizable,” Dr. Gong said. “Patient populations vary. What works well at a cancer hospital might not work well in another health system.”

Here’s the crux of it: Every four hours, APPROVE calculates an assessment score on a scale from zero to one. It considers over 40 variables, including the types of medications and therapies prescribed, lab results, and vital signs.
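The scoring step can be sketched as a logistic model over a handful of variables squashed into the zero-to-one range. To be clear, the features, weights, and bias below are invented for illustration — APPROVE’s actual model and inputs are not public — but the shape of the computation is the same: combine many signals, then map the result to a bounded score.

```python
import math

# Hypothetical feature weights; APPROVE's real model uses 40+ variables.
WEIGHTS = {
    "respiratory_rate": 0.08,
    "spo2": -0.06,
    "lactate": 0.45,
    "on_vasopressors": 1.2,
}
BIAS = -2.0

def risk_score(patient):
    """Map a patient's variables to a score between zero and one."""
    z = BIAS + sum(w * patient.get(k, 0.0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)

patient = {"respiratory_rate": 28, "spo2": 91, "lactate": 3.1, "on_vasopressors": 1}
score = risk_score(patient)
print(f"risk score: {score:.3f}")
```

In a real deployment this function would be re-run on each patient every four hours as new labs and vitals arrive.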

“It looks at [the variables] in relation to each other,” she explained. “Like most complex living organisms, changes in one can cascade through the rest. Machine learning algorithms excel in spotting these patterns.”

When APPROVE detects that a patient might be at risk — that is, when the score reaches a predefined threshold — it issues an alert to clinicians in the form of a “best-practice advisory,” a pop-up message on the patient’s digital chart with a checklist of steps (PROOFCheck) intended to prevent, or at the very least mitigate, ARF. Dr. Gong said it captures about two-thirds of the patients who end up needing to be put on a ventilator or die in the hospital.
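The alerting logic described above amounts to a threshold check that attaches an advisory to the chart when the score crosses a cutoff. The threshold value and checklist items below are placeholders — they are not PROOFCheck’s actual contents — but they show the control flow.

```python
ALERT_THRESHOLD = 0.7  # hypothetical cutoff, not APPROVE's real threshold

def check_patient(score, chart):
    """Attach a best-practice advisory to the chart when the score crosses the cutoff."""
    if score >= ALERT_THRESHOLD:
        chart.setdefault("advisories", []).append({
            "type": "best-practice advisory",
            "checklist": ["assess airway", "review sedation", "notify ICU team"],
        })
    return chart

chart = check_patient(0.82, {"patient": "example"})
print("advisories" in chart)  # True
```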

It took APPROVE 18 months to enter a clinical setting; in January 2017, it was added to the electronic medical record at Montefiore’s Wakefield Hospital in the north Bronx. In 2018, it expanded to the Jack D. Weiler Hospital and Montefiore Hospital.

Dr. Gong expressed hope that the model will reduce the incidence of ARF — a trial is ongoing. But she was quick to note that APPROVE, PALM, and Montefiore’s other AI initiatives won’t leave doctors out of a job.

“However promising these technologies might be, they don’t replace a clinician,” Dr. Gong said. “Rather, they help in assisting the whole process. That’s a very important thing to keep in mind.”