You’ve probably read that artificial intelligence is transforming medicine. While the field has yet to reach its full potential, researchers are exploring ways that machine learning, a subset of AI, can dramatically improve patient outcomes.
Machine learning is ultimately about understanding large amounts of data, making it an important tool for handling the flood of electronic data available to health care workers. Its benefits fall into a few major categories:
- Detect problems sooner
- Deliver better diagnoses when a patient reports a problem
- Predict what kinds of treatment will work, and how quickly
- Monitor the progress of that treatment
With the ubiquity of smartphones, image data is more abundant than ever. Computers are often better than humans at identifying patterns, which makes them particularly strong allies in highly visual fields such as plastic and reconstructive surgery, where they can detect, diagnose, monitor, and assess patient outcomes.
Contrary to popular opinion, plastic surgery isn’t just about aesthetics or cosmetics. In fact, plastic surgeons restore form and function over the entire body: managing burns, treating skin cancer, performing reconstruction after accidents and surgeries, and correcting congenital defects like cleft lips. AI can significantly enhance all of these procedures, in part because images and videos are already part of the plastic surgeon’s craft. They’re how plastic surgeons communicate complex problems to their colleagues, explain surgery to patients, and monitor the outcomes of operations.
AI can not only accurately assess the total surface area of a burn — essential for proper treatment — it can also predict whether a burn wound will heal without surgery. In one study, researchers used reflectance spectrometry and an artificial neural network to predict if a burn wound would take more or less than 14 days to heal.
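To make the approach concrete, here is a minimal sketch of a single-neuron classifier (the simplest possible "artificial neural network") trained to make a binary heals-within-14-days prediction from two synthetic reflectance features. The actual study used full reflectance spectra and a larger network, so the feature names, data, and thresholds below are invented stand-ins for illustration only.

```python
import random
import math

random.seed(0)

def make_sample():
    # Invented data: label 1 = heals within 14 days, simulated as higher
    # reflectance in two made-up spectral bands. Real inputs would come
    # from a reflectance spectrometer.
    label = random.randint(0, 1)
    red = random.gauss(0.7 if label else 0.4, 0.05)
    nir = random.gauss(0.5 if label else 0.3, 0.05)
    return (red, nir), label

data = [make_sample() for _ in range(200)]

# One logistic unit: two weights and a bias, trained by stochastic
# gradient descent on the log-loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    z = max(-60.0, min(60.0, z))  # clamp to keep math.exp from overflowing
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(100):  # training epochs
    for x, y in data:
        err = predict(x) - y  # gradient of log-loss w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

correct = sum((predict(x) > 0.5) == bool(y) for x, y in data)
print(f"training accuracy: {correct / len(data):.2f}")
```

A real system would validate on held-out patients rather than report training accuracy, but the same ingredients (spectral features in, a learned threshold out) carry over.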
[Image: AI used for the automatic quantification of the surface area of a burn.]
Hands and arms are complex, mechanical things. The interaction of joints and muscles is hard to model well, making the area a perfect candidate for machine learning, which can help not only with treatment but with decisions affecting the patient’s quality of life after surgery.
Today, amputees generally have two options: a prosthetic limb or a hand transplant. Many criteria determine which option is appropriate, and recent advances in robotics and transplant surgery are making the choice available to a much wider range of amputees. Here again, AI is at work: artificial neural networks have been used to design automated controllers for a variety of neuroprostheses, including those that restore hand grasp and wrist control.
But hands are highly sophisticated, able to perceive touch, move in space, and execute precise movements like playing the piano or putting together a watch. So surgeons and engineers are also collecting data from these devices, which will facilitate better design and improved prosthetics in the future.
Craniofacial surgery involves moving the bones, muscles, and skin of the skull. Some infants are born with a congenital condition known as craniosynostosis, a disorder of bone growth caused by premature fusing of the skull, which creates an abnormal head shape, increases pressure on the brain, and can delay development. Babies as young as two months old may undergo major craniofacial surgery to reshape the skull, so the earlier the condition is detected, the better. Plastic surgeons often use pictures of babies’ heads and CT scans to examine skull shape and plan surgeries.
Researchers recently trained an AI to classify the shape of a baby’s skull in order to better catch early signs of craniosynostosis. This technology can be used to screen children and decrease the number of X-rays or CT scans a baby would need for diagnosis.
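A rough sense of what "classifying skull shape" means can be given with a toy nearest-centroid classifier over two simple shape descriptors: the cephalic index (head width divided by length) and an asymmetry score. The published work trained on photograph- and CT-derived features with a proper learned model; the class names are real craniosynostosis-related shape categories, but every number and both descriptors here are invented assumptions.

```python
import math

# Invented training examples: (cephalic_index, asymmetry_score) per class.
train = {
    "normal":        [(0.80, 0.02), (0.82, 0.03), (0.78, 0.02)],
    "scaphocephaly": [(0.65, 0.03), (0.68, 0.04), (0.66, 0.02)],  # long, narrow skull
    "plagiocephaly": [(0.81, 0.15), (0.79, 0.18), (0.83, 0.16)],  # asymmetric skull
}

# Nearest-centroid "training": average the examples of each class.
centroids = {
    label: tuple(sum(v) / len(pts) for v in zip(*pts))
    for label, pts in train.items()
}

def classify(cephalic_index, asymmetry):
    """Return the class whose centroid is closest to the measurements."""
    point = (cephalic_index, asymmetry)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], point))

print(classify(0.66, 0.03))   # falls near the scaphocephaly centroid
```

The appeal of a screening tool like this is that the inputs can come from ordinary photographs, reserving X-rays and CT scans for babies the screen actually flags.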
Plastic surgeons and dermatologists typically work together to treat skin cancer. When dermatologists detect a large cancerous growth, they will consult a plastic surgeon to cut out the lesion and reconstruct the hole that is left behind. Catching cancer early is a key to successful treatment, so using AI to improve automatic detection of skin cancers from photos is incredibly useful.
Recently, a team of researchers was able to train an AI to detect and classify skin cancer with greater accuracy than dermatologists. There are a number of studies that show the promise of AI in screening for melanoma. Sophisticated whole-body photographic scanners have been developed that could use AI to help quantify and detect dangerous skin cancers. It’s this kind of narrow, task-specific AI that holds the greatest promise for medical treatment today.
Aesthetic surgery of the face requires careful pre-operative planning and measurement of the patient’s facial dimensions. Although beauty is subjective, many people share a general intuition or feeling of what makes a face beautiful. Using large datasets of facial images, AI can assist surgeons in planning aesthetic surgeries and also in guiding patients’ choice of the best procedure.
In a recent study, researchers created an automated classifier for facial beauty, trained on facial features extracted from 165 images of “attractive” female faces that were also independently graded by human referees. In this model, a decision tree algorithm assessed a set of descriptive attributes (in this particular investigation, various facial ratios) and identified the attractive facial features most closely related to post-operative target variables.
When subjected to the testing set of images, the automated classifier was just as good as humans at assessing beauty.
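To illustrate the mechanics, here is a hand-written two-level decision tree over two facial ratios, standing in for the learned decision tree in the study. The actual model was trained from the 165 graded images and used more attributes; the two ratios, the reference values, and the thresholds below are all invented for illustration.

```python
def rate_face(eye_spacing_ratio, face_width_height_ratio):
    """Return a coarse attractiveness grade from two invented facial ratios.

    Each `if` is one split of the tree; a trained decision tree learns
    these thresholds from graded examples instead of hard-coding them.
    """
    # Root split: is the width/height ratio near the reference value?
    if abs(face_width_height_ratio - 0.75) < 0.05:
        # Second split: eye spacing relative to face width.
        if abs(eye_spacing_ratio - 0.46) < 0.04:
            return "high"
        return "medium"
    return "low"

print(rate_face(0.46, 0.74))  # both ratios near reference
print(rate_face(0.55, 0.74))  # eye spacing off
```

Because each split is a readable rule, a surgeon can see *why* the model graded a face the way it did, which matters when the output is meant to guide a conversation about surgical options.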
A quantitative measurement of aesthetic improvements could not only set expectations but also discourage patients from undergoing procedures that offer marginal improvement.
There’s a lot of hype surrounding AI at the moment. But in medical fields with abundant data — such as plastic surgery — narrowly focused algorithms trained on images, videos, and patient histories are an important new instrument in the surgeon’s hands, offering earlier detection, better diagnosis, and improved outcomes.
Jonathan Kanevsky, M.D. is a resident in plastic and reconstructive surgery with research experience in machine learning, AR/VR, and 3D printing.