Big news this month on the medical front when a surgical robot called STAR — Smart Tissue Autonomous Robot — succeeded, both in a lab setting and on live animal tissue, in stitching together pieces of pig intestinal tubing with very little guidance from humans.
Not only that, but it did so with accuracy and safety matching, if not exceeding, that of human doctors, according to the researchers behind the experiment.
This is a momentous advancement for medical robots and telesurgery alike, and with sales of such robots expected to double to $6.4 billion (a yearly increase of 10.2%) by 2020, according to an Allied Market Research report published in January, we’re only going to hear about more and more progress in the near future.
Which is beyond fantastic. Ever since the first telesurgery was performed in 2001, when a surgeon in New York successfully took out the gall bladder of a patient in Strasbourg, there has been increasing hope of one day providing excellent medical care to people who would otherwise be thousands of miles away from any surgical help. People in remote areas with no access to proper medical care, people in underprivileged nations, people in war zones, people who wouldn’t be able to survive the air transport to where the surgeon they need is located or who don’t have enough time to wait for that surgeon to be flown out to their respective location.
I have no doubt that telesurgery is the future and that medical robots (not only surgical ones) are not far from becoming ubiquitous, if not the norm in hospital care and possibly out-patient care as well.
But there’s a problem. One that we need to start fixing now, before medical robots become ubiquitous and the norm.
All these surgical robots will operate over public networks and poor connections, sometimes even wireless ones, which leaves them exposed to hacking and other types of malicious attacks.
And even though it might seem that these privacy and security concerns are better suited for a sci-fi horror movie in the SAW vein than real life, a surgical robot has already been hacked. Luckily, it happened in a controlled environment and by a team of researchers, but given that we have yet to see an Internet-connected device that can’t be hacked, this does not bode well for a future of “Paging Doctor Robot to OR 1.”
A year ago, in May 2015, University of Washington researchers led by Tamara Bonaci tested Raven II, a telesurgery robot designed to operate in extreme conditions, namely poor connections over public networks, by subjecting it to cyber attacks that modified its behavior:
- They delayed, deleted, or changed the order of the commands sent to the robot.
- They modified the distance the Raven II's arm was supposed to move, as well as its degree of rotation.
- They performed a complete takeover of the robot.
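The common thread in these attacks is a command channel with no integrity protection, so an attacker sitting between surgeon and robot can alter packets undetected. As a hedged illustration (the key, field names, and message format below are invented for this sketch, not the Raven II's actual protocol), even a simple keyed HMAC on each command would make this kind of tampering detectable:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # placeholder; a real system would provision per-session keys


def sign(command: dict) -> dict:
    """Attach an HMAC so any modification of the command is detectable."""
    payload = json.dumps(command, sort_keys=True).encode()
    mac = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"command": command, "mac": mac}


def verify(message: dict) -> bool:
    """Recompute the HMAC on arrival and compare in constant time."""
    payload = json.dumps(message["command"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])


# Surgeon sends: advance the arm 5 mm, rotate 10 degrees.
msg = sign({"seq": 1, "move_mm": 5.0, "rotate_deg": 10.0})

# A man-in-the-middle scales the movement tenfold, as in the Raven II tests.
msg["command"]["move_mm"] *= 10

print(verify(msg))  # → False: the tampered command is rejected
```

A real deployment would also check sequence numbers and timestamps on receipt, so that delayed or reordered commands (the first attack class above) are rejected as well.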
Worried yet? Well, add this to your list of worries then: Once hacked, a surgical robot can become the victim of a denial of service attack if the hacker decides to flood the system with commands.
And while a denial of service attack can mean significant monetary loss for a company, be it in the form of clients/business lost or ransom payment made to hackers, where a surgical robot is concerned, an attack of this nature can mean loss of human life. Terrifying, isn’t it?
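On the receiving side, one standard mitigation against command flooding is rate limiting. The sketch below is a minimal token bucket; the rate and burst values are arbitrary placeholders for illustration, not clinically derived limits:

```python
import time


class TokenBucket:
    """Simple rate limiter: tokens refill at `rate` per second, up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed since the last command.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # drop the command instead of queueing the flood


bucket = TokenBucket(rate=100, burst=20)  # hypothetical limits
accepted = sum(bucket.allow() for _ in range(1000))  # a 1,000-command flood
print(accepted)  # only the initial burst, plus a trickle of refills, gets through
```

Dropping excess commands keeps the robot responsive to the legitimate operator instead of letting an attacker's flood starve it.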
Or maybe not. Maybe you’re thinking that this was just an experiment and since no real-life incidents have been reported to date (which is a fact), there’s no need to panic.
But let’s not forget that these surgical and non-surgical robots operate and will continue to do so within the boundaries and the privacy and security means of the healthcare industry — an industry that is so plagued by breaches, data theft, and ransomware that IBM named 2015 “The Year of the Healthcare Breach” in its Cyber Security Intelligence Index.
And if the past months are any indication, 2016 could very well turn out to be “The Year of the Healthcare Breach – The Sequel.”
Since February, over a dozen hospitals and even more healthcare institutions have fallen victim to ransomware, their systems rendered unusable, their staff forced to resort to pen, paper, and fax machines (remember those?), their urgent surgeries postponed, their patients transferred, and ultimately, their money shelled out to hackers.
After the highly publicized case of Hollywood Presbyterian Medical Center, which was forced to declare a state of emergency and pay 40 bitcoin (approx. $17,000) to regain access to its files and equipment, reports of similar attacks kept pouring in.
In March, the same ransomware — named Locky — hit Methodist Hospital in Henderson, Kentucky and left personnel unable to access patient files. Soon after that, it was reported that MedStar Health, a healthcare organization operating over 120 entities including 10 hospitals in the Baltimore–Washington area, had been attacked by some type of ransomware as well.
Add to this the fact that, as research carried out by Sergey Lozhkin at Kaspersky Lab brought to light, medical equipment is often not separated from the local office network, and all that bright future of telesurgery and medical robots looks riddled with potential breaches. And ensuing malpractice lawsuits.
So as I said, it’s high time we started thinking of a solution. It’s high time hospitals and other healthcare institutions started thinking more seriously about IT security and taking important steps towards protecting not only their patients’ data from hackers, but their patients’ robot-doctors and all other medical devices too.
The info is all out there. The regulations are too, albeit maybe not all of them at this time, but they’re definitely in the making. For example, the National Institute of Standards and Technology is set to release best practices aimed at helping hospitals deal with cyber threats in the near future. The issue is implementation. HIPAA has been around since the late ’90s, and even though it clearly states that healthcare institutions should take all measures necessary to protect the confidentiality of patient data from known threats, breaches have been a dime a dozen in the past two years, with nearly 90% of healthcare organizations having suffered at least one such incident.
The same goes for encryption. Not only is it out there, but it’s never been more available (ever since the Snowden revelations in 2013). Despite this, as a Sophos survey found, the healthcare industry has one of the lowest rates of data encryption: Only 31% of organizations reported using extensive encryption, while a whopping 20% said they didn’t use it at all.
To make a bad situation worse, it appears that healthcare budgets for security have either remained the same or even dropped in the past year, as the Ponemon Institute shows in its Sixth Annual Benchmark Study on Privacy and Security of Healthcare Data.
This needs to change, and fast. Medical devices, especially telesurgery ones, need to have their own networks, separated from the corporate network.
In turn, these networks need to be encrypted. They need bespoke VPNs, and the authenticity of the parties involved in the communication needs to be mutually verified so that man-in-the-middle attacks can't happen.
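In practice, mutual verification usually means mutual TLS: each endpoint presents a certificate and refuses to talk to any peer that can't. Here is a minimal sketch using Python's standard ssl module; the certificate file names are placeholders, and a real deployment would provision them from the hospital's own certificate authority:

```python
import ssl


def make_mutual_tls_context(cert_file: str, key_file: str, peer_ca_file: str) -> ssl.SSLContext:
    """Build a server-side TLS context that both presents a certificate
    and demands a valid one from the client.

    The file paths are hypothetical; each hospital would issue its own
    device and operator certificates from an internal CA.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse obsolete protocol versions
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # our own identity
    ctx.load_verify_locations(cafile=peer_ca_file)             # CA trusted for the peer
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert
    return ctx
```

With verify_mode set to CERT_REQUIRED, the handshake itself fails if either side can't prove its identity, which is precisely what blocks a man-in-the-middle from impersonating the surgeon's console or the robot.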
Hospitals and healthcare organizations need to hire full-time system admins and IT security professionals to train their entire staff and then test them on operational security until no one clicks on a suspicious link in an email, no matter who the sender is (or at least appears to be).
Because when it comes to surgical robots, hospitals, and the healthcare system as a whole, IT security best practices that are well implemented and well adhered to could mean the difference between a 1-inch incision and a 10-inch one. Between perfectly spaced stitches and surgical complications with a long recovery time. Sometimes even between life and death.
Aike Müller is founder of Keezel. His career in information management and IT security began as an M&A IT expert at PwC. He went on to cofound a government-contract consulting firm specializing in automated assurance. As a freelance consultant, he worked on process and supply chain assurance for national and international clients in logistics, retail, and sustainable agriculture. He developed Keezel as a solution to security issues he encountered while working at client locations. You can follow him on Twitter: @themuli.