More than half a million people are diagnosed with cancers of the head and neck each year, many of whom choose to undergo radiotherapy. But it’s a delicate process: The surrounding tissue can be severely damaged if it isn’t carefully isolated prior to treatments.
In partnership with University College London Hospital, Google subsidiary DeepMind is exploring ways artificial intelligence (AI) can aid in the segmentation process. Today it announced a significant step toward that vision: validation of a model that exhibits "near-human performance" on CT scans.
“Automated … segmentation has the potential to address these challenges but, to date, performance of available solutions in clinical practice has proven inferior to that of expert human operators,” the researchers wrote. “In recent years, deep learning based algorithms have proven capable of delivering substantially better performance than traditional segmentation algorithms.”
Segmentation performed by humans can be inconsistent and imperfect, they note. It’s also typically time-consuming — experts can spend four hours or more on a single case.
In a paper (“Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy”) to be presented at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference on Sunday, the DeepMind team described a three-dimensional U-Net architecture trained with 663 tomography scans covering 21 organs (larynx, tongue, nasal cavity, connective and soft tissue, etc.) sourced from head and neck cancer patients at the University College London Hospitals NHS Foundation Trust (UCLH). Segmenting a scan took less than 30 seconds on a single GPU, they noted.
On 19 of the 21 organs, there wasn’t a substantial difference (defined as a variation of 5 percent or more) in performance between the deep learning model and a team of therapeutic radiographers with several years’ experience. (The two exceptions were the brainstem and the right lens.)
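Agreement between a model's segmentation and an expert's is typically scored with an overlap metric such as the Dice-Sørensen coefficient (DeepMind's paper evaluates a surface-based variant of it). As a rough illustration of the idea, and not the paper's exact metric, the volumetric form can be computed over two binary voxel masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Volumetric Dice-Sørensen coefficient between two binary masks.

    Masks are flat sequences of 0/1 voxel labels; 1.0 means perfect
    overlap, 0.0 means no overlap.
    """
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two 8-voxel masks that agree on 3 of the labeled voxels.
model_mask  = [1, 1, 1, 0, 0, 0, 1, 0]
expert_mask = [1, 1, 1, 1, 0, 0, 0, 0]
print(dice_coefficient(model_mask, expert_mask))  # → 0.75
```

A score of 1.0 indicates the model's contour exactly matches the expert's; the "5 percent or more" threshold in the study corresponds to gaps in this kind of agreement score.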
Furthermore, on an independent test set of CT scans from 24 patients (from The Cancer Imaging Archive) collected at sites the model hadn’t seen previously, there was “no substantial [gap]” between results obtained by the model and radiographers on any individual patient.
The next phase of research will test the AI system’s performance in a clinical environment.
The team believes the system has the potential to reduce the lag time between diagnosis and treatment, and to cut down on the time it takes to adapt procedures as a tumor shrinks, a process known as adaptive radiotherapy.
“Increasing demands for and shortages of trained staff already place a heavy burden on healthcare systems, which can lead to long delays for patients as radiotherapy is planned,” they wrote. “As well as changing patients’ lives, this research could also free up time for the clinicians who treat them, meaning they get to spend more time on patient care, education and research.”
DeepMind is involved in several health-related AI projects, including an ongoing trial at the U.S. Department of Veterans Affairs that seeks to predict when patients’ conditions will deteriorate during a hospital stay. Previously, it partnered with the U.K.’s National Health Service to develop an algorithm that could search for early signs of blindness, and to improve breast cancer detection by applying machine learning to mammography.
Google more broadly has invested heavily in health care applications of AI. This spring, the Mountain View company’s Medical Brain team said it had created an AI system that could predict the likelihood of hospital readmission; in June, it used that system to forecast mortality rates at two hospitals with 90 percent accuracy.
In February, scientists from Google and Verily Life Sciences, its health-tech subsidiary, created a machine learning network that could accurately deduce basic information about a person, including their age and blood pressure, and whether they were at risk of suffering a major cardiac event like a heart attack.
Separately, Verily is developing automated systems to tackle sleep apnea, pharmaceutical drug discovery, blood collection, and health insurance.