The “Internet of things” has been a Silicon Valley buzzword for the last few years, so it’s ironic that we now read almost as much tech news coverage about Internet of things hacking. In recent months, for example, VentureBeat has reported that the FBI is warning car makers and owners about vehicle hacking risks, and that IoT devices may be exploited as Trojan horses. And just a few days ago, we heard about a glitch at smart doorbell company Ring that exposed videos of users’ homes to other users.
Many of us in tech, long aware of hackers and malicious software, often assume that these are just the growing pains of a new platform. But for the average consumer, these are utterly terrifying threats, not just to their devices and personal data, but to their own physical safety and the safety of their loved ones. And as more examples of compromised devices accumulate — and new devices like Google’s just-announced Home increase the potential for vulnerabilities — the industry as a whole is being placed in jeopardy.
How did we get here? To fully grasp the problem, we need to understand at a basic level why so many IoT hacks are happening — and what we in tech must do now, to address them.
Silicon Valley culture contributes to IoT insecurity
As we saw during the rapid transition to mobile apps, most of the best development practices from the PC era have not carried over to IoT development. The code that runs on IoT devices is written in the same languages we use to build websites, apps, and PC software. Despite that fact, I regularly find glaring vulnerabilities we’ve known about for decades in IoT products, even those from major companies: a device shipped with client software lacking sufficient protective measures, for example, or a device that encrypts communication with the server but fails to secure the device’s own Internet connectivity protocols.
Again, these failures aren’t due to IoT being a cutting-edge technology that developers are still learning to use; they’re due to a basic disregard for long-established traditions of safe coding. And they are leading directly to extremely dangerous and disturbing breaches in consumer safety and security, seemingly every day. To take a recent, notorious example: We all know that hackers regularly manage to break into WiFi-connected baby monitors. Less well known is that some of these hacks were possible because the devices were sending and storing sensitive data, including credentials, in plain text.
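The plain-text-credential failure is worth dwelling on, because the long-established fix requires nothing exotic. As a minimal sketch (hypothetical device-side code, not taken from any real product), compare storing a password verbatim with storing only a random salt and a slow, salted hash, using nothing beyond Python’s standard library:

```python
import hashlib
import hmac
import os

# Insecure: the credential is kept exactly as typed, so anyone who
# reads the device's storage (or sniffs unencrypted traffic) gets it.
def store_plaintext(password: str) -> str:
    return password

# Safer: keep only a random salt plus a slow, salted hash (PBKDF2,
# in Python's standard library). The password itself is never stored.
def store_hashed(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

None of this is new; salted, iterated hashing has been standard practice since long before the first connected baby monitor shipped.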
The chief culprit here is not coding but culture. In Silicon Valley, the priority is to get on the latest disruptive platform and rush to be first to market. And we seem to be collectively suffering from amnesia. We keep seeing the same security problems over and over again in the mobile ecosystem, as inexperienced teams rush their apps to market, leaving many of them vulnerable to hacking. Repeating this pattern, we regularly see IoT devices being produced by people with little or no hardware experience and scant background dealing with interaction between hardware, middleware, and software.
The market for exploitable things
All these cultural shortcomings are greatly compounded by the rapid growth of connected devices. IDC forecasts the IoT market will reach $1.7 trillion in 2020, while in the same time frame Gartner estimates the black market for IoT will exceed $5 billion. Indeed, malware for IoT devices is already a thriving industry, and it grows more sophisticated every day (much of it is distributed for free), assisted by a near-unlimited supply of attack vectors. Any device with access to a network can be infected, and a single instance of malware can then spread across an entire series of connected devices. And because these systems are built in complex layers, it’s also hard to identify where any one problem originates.
So what can we do about it?
For the reasons I’ve outlined, it may be impossible for tech company engineering teams to reform from within, even if that’s their deepest wish. Instead, I believe it falls to the executives who lead these companies and the investors who fund them to press for change now:
Demand more audit processes: According to a recent AT&T report on IoT, fewer than half of the organizations developing devices analyze their security logs and alerts more than once a day, a pace that must quicken as the risk profile rises. Worse, only 14 percent of the companies surveyed have instituted a formal audit process to determine whether their devices are secure, and, even more telling, only 17 percent involve their companies’ boards in decision-making around IoT security.
Demand continuous monitoring and automated security processes: Related to the above, companies need more visibility into what their devices are doing in the wild. They need to log all device activity and to classify and immediately block suspicious activities and deviations. Given the massive increase in connected endpoints and the data volumes that IoT devices generate, this entire process of data monitoring, threat detection, and protection should ideally be automated.
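The monitoring-and-deviation step above can be sketched very simply. As an illustrative toy (the device names, rates, and threshold below are invented, and a production system would draw on far richer signals), a baseline-and-deviation rule flags any device whose message rate suddenly departs from its own history:

```python
from statistics import mean, stdev

def flag_deviations(baseline, current, threshold=3.0):
    """Flag devices whose latest message rate deviates from their
    historical baseline by more than `threshold` standard deviations.

    baseline: dict of device_id -> list of historical rates
    current:  dict of device_id -> latest observed rate
    """
    flagged = []
    for device_id, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        rate = current.get(device_id)
        if rate is not None and sigma > 0 and abs(rate - mu) > threshold * sigma:
            flagged.append(device_id)
    return flagged

# A camera that suddenly sends ~80x its normal traffic gets flagged;
# one behaving within its historical range does not.
fleet_history = {"cam-1": [10, 12, 11, 9, 10], "cam-2": [5, 6, 5, 7, 6]}
latest_rates = {"cam-1": 11, "cam-2": 480}
print(flag_deviations(fleet_history, latest_rates))  # ['cam-2']
```

The point is not that a z-score solves IoT security; it is that even this much automated anomaly detection is absent from many shipping products.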
We also, perhaps, must do some soul searching. At a certain level, Silicon Valley cherishes its renegade hackers and builds them right into our creation myths — think Jobs and Woz using a blue box to prank call the Pope. This probably has a subconscious influence on the industry, making the hacking of a self-driving car or drone seem like a cool stunt rather than a potentially life-threatening exploit. We should celebrate feats of technical ingenuity on the right occasion and the right platform, but neither is the case here: A whole category of products, not to mention billions of customers, now hangs in the balance, utterly dependent on how we respond to this challenge.
Min-Pyo Hong is CEO and founder of SEWORKS, a Qualcomm and SoftBank Ventures Korea-backed security solutions developer based in San Francisco. He has advised corporations, NGOs, and governments on digital security issues for over 20 years, and led a team of five-time finalists at DEFCON.