Health care remains one of the fastest-growing markets for artificial intelligence (AI) applications and services, with an overall value that’s expected to reach $6.6 billion by 2021. It’s easy to see why: AI systems can analyze ultrasound scans, detect eye disease, and speed up the segmentation of X-rays and computed tomography (CT) scans. And Nvidia, for its part, aims to be at the forefront.
In an announcement timed to coincide with the Radiological Society of North America (RSNA) annual meeting in Chicago this week, the Santa Clara company revealed that its Clara software development kit (SDK) — a set of graphics processing unit (GPU)-accelerated computing libraries, sample applications, and more — is now generally available. It also unveiled the Transfer Learning Toolkit and AI Assisted Annotation SDK, two AI tools tailored to medical imaging workflows, and partnerships with Ohio State University and the National Institutes of Health.
The news follows on the heels of Nvidia’s partnership with Scripps Research Translational Institute (SRTI), a nonprofit research body, to develop AI-guided genomics processing and analysis tools, and with Canon Medical Systems to promote the use of AI techniques in medical and related research.
Nvidia said that more than 50 health care institutions — including Massachusetts General Hospital (MGH), Brigham and Women’s Hospital (BWH), the University of California San Francisco, the Mayo Clinic, and King’s College Hospital — have invested in Nvidia’s DGX lineup of deep learning-optimized servers and workstations, and that it’s working with over 75 medical centers, medical imaging companies, research institutes, startups, and providers to apply AI to health care.
“To bring AI to radiology across the globe, we need to get radiologists involved in the creation and adaptation of the algorithms for their patients,” Abdul Hamid Halabi, global business development lead of Nvidia’s Health Care and Life Sciences division, said. “It’s also important to give them standardized ways to share and integrate these breakthroughs with their colleagues and enable them to perform on site data analysis with less regulatory or privacy risk. Intelligent instruments and automated workflows are a reality. Nvidia is partnering with industry thought leaders to enable radiology to cross the AI chasm through the Nvidia Clara platform.”
Nvidia announced the Clara SDK in September, alongside the Clara AGX, a GPU-based architecture optimized for AI inferencing of data from 3D medical instruments. In brief, it provides developers with graphics (Vulkan and OptiX), compute (CUDA, cuFFT, and cuBLAS), and AI libraries (CT Recon, Volume Segmentation, Lung Detection, Render Server); example applications for image processing and rendering; and computational workflows for CT, magnetic resonance imaging (MRI), and ultrasound. In the future, it’ll also leverage containers and Kubernetes, Google’s open source container-orchestration system, to automate the deployment and management of hardware-abstracted applications in embedded, on-premises, and cloud environments.
“The main benefit of Clara is acceleration,” Halabi told VentureBeat in a phone interview. “We’re increasing the efficiency of the GPUs underneath. When you have thousands of applications coming through, you really want to … be able to pool your resources as much as possible, perhaps by reusing the GPU for [model] construction. [With the SDK,] you can set it up in a way where you’re running 10 different AI applications on the same GPU.”
Prior to today’s announcement, a number of Nvidia’s partners — including ImFusion, Aidence, Arterys, Visage Imaging, Nuance, InferVision, Imagia, Subtle Medical, and Kheiron — tested the Clara SDK as part of a pilot program. Nuance tapped Clara to launch a marketplace — dubbed AI Marketplace, appropriately — that will serve as a hub for medical-imaging apps, while MGH and BWH Center for Clinical Data Science used it to create an Abdominal Aortic Aneurysm detection model that will be deployed on the aforementioned AI Marketplace.
“If radiology is to benefit from the thousands of new AI applications being developed, we will need to have a clear path to deployment at a broad spectrum of clinical and imaging centers. This deployment path is key to a scalable adoption of AI in Radiology,” Mark Michalski, executive director at MGH & BWH Center for Clinical Data Science, said.
Transfer Learning Toolkit
Nvidia’s Transfer Learning Toolkit, or TLT for short, tackles another pain point in AI as it relates to health care: fine-tuning and retraining models. It’s a Python package containing AI models that are optimized and trained on Nvidia Pascal, Volta, and Turing GPUs, with APIs designed to “accelerate deployment,” reduce the computational resources needed to build applications, and adapt pretrained models to new tasks.
“This is something we learned from our work with self-driving cars: if you train a car in the U.K., you can’t easily drive it in the U.S. — you need to actually adapt it for where you’re going,” Halabi said, in a nod to Nvidia’s Project Maglev. “We’re realizing that there’s going to be a need, in some cases, to adopt the models to [a new patient population.] So we’re providing an SDK that allows you to take an existing model that [a] partner institute or startup created, and incrementally update the model with very little data.”
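Nvidia hasn’t published TLT’s internals here, but the underlying idea Halabi describes — keep a model trained elsewhere frozen, and retrain only a small piece of it on a handful of local examples — is standard transfer learning. A minimal NumPy sketch (all names and the toy data are illustrative, not TLT’s API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor. In a real transfer-learning
# setup, these frozen weights would come from a model trained on a large
# source dataset (e.g., another hospital's imaging archive).
W_pretrained = rng.standard_normal((64, 16))

def extract_features(x):
    """Frozen backbone: project raw inputs into the learned feature space."""
    return np.tanh(x @ W_pretrained)

# A *small* labeled dataset from the new site or patient population.
x_new = rng.standard_normal((20, 64))
feats = extract_features(x_new)            # computed once; never updated
y_new = (feats[:, 0] > 0).astype(float)    # toy binary labels, recoverable
                                           # from the frozen features

# Only the new classification head is trained — far fewer parameters,
# so far less data and compute are needed than full retraining.
w_head = np.zeros(16)
b_head = 0.0
lr = 0.5

for _ in range(500):
    logits = feats @ w_head + b_head
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    grad = probs - y_new                    # dLoss/dlogits, cross-entropy
    w_head -= lr * (feats.T @ grad) / len(y_new)
    b_head -= lr * grad.mean()

preds = (feats @ w_head + b_head) > 0.0
accuracy = (preds == (y_new > 0.5)).mean()
```

The point of the sketch is the division of labor: the 64×16 backbone stays fixed, while the 17 head parameters adapt to the new population from just 20 examples.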
In the initial release, Nvidia is making available an AI system that won the University of Pennsylvania Perelman School of Medicine’s BraTS challenge for 3D MRI brain tumor segmentation at the 2018 International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). Among the other AI models shipping are a tumor segmentation model trained on MRI data, and a 3D pancreas and tumor segmentation model trained on portal venous phase CT data.
Nvidia has previously developed AI systems that generate synthetic scans of brain cancer.
TLT is available for Nvidia Tesla and DGX products, and models built with it can be deployed to the Clara platform for inference.
AI Assisted Annotation Toolkit
Nvidia AI Assisted Annotation is something of a complement to TLT. It promises to speed up the process of analyzing CT or MRI scans of a patient — which normally involves hours of manually drawing, annotating, and correcting organs and abnormalities of interest — with the aid of the SDK’s AI-assisted workflows. Nvidia claims that, thanks in part to an integration with the TLT that allows it to continuously learn and improve its accuracy from newly annotated images, scan exams can be sped up by a factor of ten.
“It uses AI to help the physician annotate the datasets,” Halabi said. “What it will do is actually bring in all the AI that you’ve already created, or somebody else has already created, then use that to annotate images, or to assist you while annotating your images … You’re able to just click on an organ … or object, and it will automatically fill in and start annotating.”
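Nvidia hasn’t detailed the algorithm behind this click-to-annotate workflow, and its production version is driven by deep learning models. But the interaction Halabi describes — click one point, let the tool propose the whole region — can be illustrated with classical seeded region growing (the function name and synthetic “scan” below are hypothetical, for illustration only):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Grow a segmentation mask outward from a clicked seed pixel,
    adding 4-connected neighbors whose intensity is within `tol`
    of the seed's intensity."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Synthetic "scan": a bright 12x12 organ on a dark background.
scan = np.zeros((32, 32))
scan[8:20, 8:20] = 1.0

# One "click" inside the organ yields the full region.
organ_mask = region_grow(scan, seed=(12, 12), tol=0.1)
```

The annotator’s correction loop then only needs to fix the proposal’s edges rather than trace every boundary by hand — which is where the claimed tenfold speedup comes from.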
Nvidia’s collaboration with the Ohio State University Wexner Medical Center will see the academic medical center use Clara to build an in-house marketplace for clinical imaging. Nvidia claims it’ll be the first of its kind in the U.S.
In addition, OSU will integrate machine learning algorithms — like those that detect brain hemorrhage or coronary artery disease, for example — into clinical workflows such as early warning systems in emergency departments and diagnostic assistants.
“The rapid adoption of artificial intelligence has opened new opportunities in medical imaging,” said Dr. Richard White, chair of the department of radiology at OSU’s Wexner Medical Center. “Working with Nvidia, we’ve streamlined the process of integrating AI into the workflow, which will lead to improved patient outcomes.”
Nvidia’s partnership with the National Institutes of Health (NIH), meanwhile, will investigate AI that can streamline clinical trials for brain and liver cancer. Several Nvidia researchers will colocate with clinicians at the NIH Clinical Center to test tools that combine imaging, genomic, and clinical data to deliver treatment to cancer patients.
AI has the potential to improve the accuracy of tumor measurements and cancer staging by incorporating “novel biomarkers” beyond the size of the tumor, said Dr. Elizabeth Jones, director of the Radiology and Imaging Sciences Department at the NIH Clinical Center.
“Applying a powerful tool such as deep learning to medicine will require a truly multidisciplinary team of physicians, hospitals, and computer scientists to work together to help realize the potential of computer models for medical imaging, and to help develop predictive imaging biomarkers,” Jones said.