The next stage of evolution for AI is democratization. That means making it available to businesses of all sizes, and not just to companies like Microsoft, Google or Apple. The opportunity in front of us is using AI to transform how we operate our businesses, no matter the size or industry.
Contemporary AI techniques have given us magic in areas like speech recognition and image labeling, but there is much more work to be done. Think of your business and where you or your team make decisions about resource allocation. What if you could improve those decisions and optimize your consumption of those resources? How much money would you save? The next stage for AI is giving everyone the tools to find answers to questions like those.
With great power comes great responsibility
Democratization of AI means that your company’s chief security officer (CSO) will be accountable for a highly available, secure infrastructure running the AI that supports the business. High availability matters because if the AI is blocked or terminated for any reason, the business suffers. Security is equally important, because if the AI receives bad data, whether by accident or through intrusion, the business may make poor decisions as a result.
The influence of AI on enterprise changes how people do things — most importantly how operations are handled at the CSO level. This role is now responsible for the integrity of the AI infrastructure that powers the business, and like the famous Spider-Man phrase says: “With great power comes great responsibility.” Of course, the fields of network and software security will change with AI as well. What is critical to understand is that the democratization of AI means something bigger — it means the CSO is at the very core of the business.
A cloud of fuzzing technology
Cloud computing is the enabler for the democratization of AI because cloud services bundle together the three things AI technology needs to thrive: big compute, big data, and big talent. I have worked through this shift firsthand over the past two years.
At Microsoft, we developed technology called whitebox fuzzing that permits machines to ask “what if” questions about software and look for million-dollar security bugs. Our technology discovered a third of the security-critical fuzzing bugs during the development of Windows 7, but it required enormous compute resources and specialized expertise that were difficult to scale under the traditional model. Those experiences prompted us to build a cloud service that makes fuzzing technology available to everyone.
Today, the Microsoft service known as Project Springfield makes it possible for anyone to use artificial intelligence to find costly bugs. This project is one example of a trend towards AI-enabled cloud services that package data, talent, and compute for cybersecurity.
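Whitebox fuzzing itself drives the “what if” questions with symbolic execution of the program under test, but the basic feedback loop is easier to see in a toy mutation-based fuzzer. The sketch below is purely illustrative and is not Project Springfield’s implementation: the `parse_header` target and the mutation operators are hypothetical stand-ins.

```python
import random

random.seed(1234)  # fixed seed so the run is reproducible

def parse_header(data: bytes) -> int:
    """Hypothetical parser under test: crashes on truncated input."""
    if data[:2] == b"MZ":          # magic bytes
        return data[2] + data[3]   # IndexError if the input is too short
    return 0

def mutate(seed: bytes) -> bytes:
    """Randomly flip a bit, insert a byte, or delete a byte."""
    data = bytearray(seed)
    op = random.choice(("flip", "insert", "delete"))
    if op == "flip" and data:
        i = random.randrange(len(data))
        data[i] ^= 1 << random.randrange(8)
    elif op == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif op == "delete" and data:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Feed mutated inputs to the target and collect the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz(parse_header, b"MZ\x01\x02")
```

A production fuzzer adds coverage feedback, a seed corpus, and crash triage on top of this loop; the whitebox approach goes further by solving for inputs that reach specific branches rather than mutating blindly.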
Establishing legitimacy for AI-supported businesses
Cybersecurity will be responsible for the care and feeding of AI infrastructures, including aspects that we are only beginning to comprehend. We’ve all heard the expression “garbage in, garbage out,” but have you thought about how it applies to the AI supporting your business? Have you established a data poisoning security strategy to make sure an attacker can’t trick the AI into recommending the wrong decisions? If you did make the wrong decision based on that bad data, how long would it take you to find out and respond?
Data poisoning is already a concern within the cybersecurity world. For example, anti-malware depends on the signals and samples submitted by a wide array of sources, and vendors must stay continually vigilant for attackers trying to game the system.
While our methodologies have evolved to include AI in the protection of such systems, the fight is never over. Imagine you own a ride-hailing service and all your drivers turn off their phones at the same time after a big event. The AI system that matches riders to drivers might notice a lack of cars on the road and conclude that there is a shortage. It could then take action, such as raising prices in response to the apparent drop in supply and rise in demand. The challenge for the CSO will be to detect data poisoning, protect the business by correcting poisoned decisions, and respond in a timely fashion with an antidote to prevent future complications.
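One pragmatic defense is to sanity-check incoming signals against recent history before the model acts on them: a sudden, coordinated drop in active drivers is more likely poisoning or an outage than a real market shift. The sketch below is a hypothetical illustration under assumed names and numbers (`poisoning_suspected`, `surge_multiplier`, the z-score threshold), not how any real ride-hailing system prices rides.

```python
from statistics import mean, stdev

def poisoning_suspected(history: list[int], current: int,
                        z_threshold: float = 3.0) -> bool:
    """Flag a supply signal that deviates wildly from recent history."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

def surge_multiplier(history: list[int], current_drivers: int,
                     demand_factor: float) -> float:
    """Raise prices only when the supply signal looks trustworthy."""
    if poisoning_suspected(history, current_drivers):
        return 1.0  # freeze pricing and alert the security team instead
    return max(1.0, demand_factor / max(current_drivers, 1))

history = [480, 505, 495, 510, 500]  # active drivers over recent intervals
surge_multiplier(history, 40, 600)   # coordinated drop: pricing holds at 1.0
```

Freezing the decision is only the first step; the CSO still needs the detection event routed somewhere a human or a secondary signal can confirm what actually happened.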
In the end, customers need to understand and accept a business’s decisions or they will leave. As AI comes to support a business, establishing its legitimacy requires the CSO to deeply understand how the AI “thinks,” not simply the infrastructure that runs it. Meeting these challenges will stretch the CSO role in ways it has never been stretched before. The CSO will be tasked with helping business leaders advocate for and explain AI-supported processes to the world. It won’t be easy, but it will put security at the heart of every business.
David Molnar is a member of the IEEE, the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity.