An AI system developed by researchers at Google and the University of California, San Francisco predicts physicians' prescribing decisions 75% of the time, according to a paper published in the journal Clinical Pharmacology and Therapeutics. If someday deployed in a health care system, it could flag prescriptions that look abnormal for a patient and their situation, much as credit card companies detect fraudulent transactions.
“While no doctor, nurse, or pharmacist wants to make a mistake that harms a patient, research shows that 2% of hospitalized patients experience serious preventable medication-related incidents that can be life-threatening, cause permanent harm, or result in death,” wrote research scientist Kathryn Rough and physician Alvin Rajkomar of Google Health in a blog post. “However, determining which medications are appropriate for any given patient at any given time is complex — doctors and pharmacists train for years before acquiring the skill.”
To this end, the AI system trained on a data set containing approximately three million medication orders from over 100,000 hospitalizations, using retrospective electronic health record data de-identified by randomly shifting dates and removing portions of the record in accordance with HIPAA (including names, addresses, contact details, record numbers, physician names, free-text notes, images, and more). Importantly, the data set wasn’t restricted to a particular disease or therapeutic area, which made the task more challenging but also helped to ensure the model could identify a larger variety of conditions.
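The de-identification steps described above — dropping HIPAA-protected fields and shifting all dates in a record by a random offset so that relative intervals are preserved — can be sketched roughly as follows. The field names and date ranges here are illustrative assumptions, not the paper's actual pipeline:

```python
import random
from datetime import timedelta

# Hypothetical PHI fields to remove, loosely based on the categories
# listed in the article (names, addresses, contact details, etc.).
PHI_FIELDS = {"name", "address", "contact", "record_number",
              "physician_name", "free_text_notes", "images"}

def deidentify(record, date_fields=("admit_date", "order_date")):
    """Return a copy of the record with PHI removed and dates shifted."""
    # One random offset per record keeps intervals within the record intact.
    offset = timedelta(days=random.randint(-365, 365))
    clean = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    for field in date_fields:
        if field in clean:
            clean[field] = clean[field] + offset
    return clean
```

Because every date in a record is shifted by the same offset, time-since-admission features remain usable for modeling even though absolute dates are obscured.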
The researchers evaluated two models: (1) a long short-term memory (LSTM) recurrent neural network that learned to model long-term dependencies, and (2) a logistic model like the type commonly used in clinical health research. Both were compared with a baseline that ranked the most frequently ordered medication based on a patient’s hospital service (e.g., General Medical, General Surgical, Obstetrics, Cardiology) and amount of time since admission. Each time a medication was ordered in the retrospective data, the models ranked a list of 990 possible medications, and the researchers assessed whether the models assigned high probabilities to the medications doctors actually ordered in each case.
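The baseline described above — ranking medications by how frequently they are ordered for a given hospital service and time since admission — could be sketched as follows. The data shapes and class name are assumptions for illustration, not code from the study:

```python
from collections import Counter, defaultdict

class FrequencyBaseline:
    """Rank medications by historical order frequency per context."""

    def __init__(self):
        # (service, days_since_admission) -> Counter of medications
        self.counts = defaultdict(Counter)

    def fit(self, orders):
        """orders: iterable of (service, days_since_admission, medication)."""
        for service, day, med in orders:
            self.counts[(service, day)][med] += 1

    def rank(self, service, day):
        """Return medications for this context, most frequently ordered first."""
        return [med for med, _ in self.counts[(service, day)].most_common()]
```

The LSTM and logistic models would then be judged on whether they rank the actually ordered medication higher than this simple frequency-based ordering does.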
Each model’s performance was evaluated by comparing its ranked choices against the medications the physician actually prescribed. The best-performing model was the LSTM — 93% of its top-10 lists contained at least one medication that clinicians would order for the given patient within the next day. In 55% of cases, the medication the doctor actually prescribed appeared among the model’s top-10 most likely medications, and 75% of ordered medications were ranked in the top 25.
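The top-10 and top-25 figures above are instances of a standard top-k accuracy metric: for each order, check whether the medication actually prescribed appears in the model's k highest-ranked candidates. A minimal sketch, with illustrative variable names:

```python
def top_k_accuracy(ranked_lists, actual_orders, k):
    """Fraction of orders whose actual medication appears in the model's top k.

    ranked_lists: one ranked candidate list per order, best-first.
    actual_orders: the medication actually ordered in each case.
    """
    hits = sum(1 for ranked, actual in zip(ranked_lists, actual_orders)
               if actual in ranked[:k])
    return hits / len(actual_orders)
```

Under this metric, the reported results correspond to roughly 0.55 at k=10 and 0.75 at k=25 for the LSTM.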
“It’s important to remember that models trained this way reproduce physician behavior as it appears in historical data, and have not learned optimal prescribing patterns, how these medications might work, or what side effects might occur. In our next phase of research, we will examine under which circumstances these models are useful for finding medication errors that could harm patients,” wrote the researchers. “We look forward to collaborating with doctors, pharmacists, other clinicians, and patients as we continue research to quantify whether models like this one are capable of catching errors, keeping patients safe in the hospital.”
Google’s work in AI applied to health care is extensive, to say the least. The tech giant has developed models that classify chest X-rays with “human-level” accuracy, and it’s proposed hybrid approaches to AI transfer learning for medical imaging. Last year, Google claimed its lung cancer detection AI outperformed six human radiologists and that its skin condition-diagnosing model detected 26 skin conditions as accurately as dermatologists. More recently, the company said it had trained an AI model to identify breast cancer in mammography imagery with fewer false positives. And it worked with Aravind Eye Hospital in Madurai, India, to deploy a machine learning model that could diagnose eye diseases from retinal images.
“The same accuracy [as] a much more invasive blood test, now you can do that with retinal images. There’s a real hope this could be a new kind of thing — [when] you go to the doctor, they’ll take a picture of your eye, and we’ll have a longitudinal history of your eye and be able to learn new things from that,” said Google AI chief Jeff Dean of the diagnostic eye model. “That’s kind of the gold standard of care. [W]ith good, high-quality training data, you can train a model and get the effects of retinal ophthalmologists.”