Earlier this month, Google CEO Sundar Pichai published an excellent set of AI principles intended to guide the company’s AI research. The pronouncement was prompted at least in part by Google’s controversial work on Project Maven and a general call for more accountability within the AI industry. Each of the seven points listed by Pichai serves as an excellent reference for companies working across a broad spectrum of AI technologies.
But what about AI’s direct impact on the individual — or how individualized AI will affect people and their behavior? Not all of it will be good; there is already concern that our overreliance on AI is diminishing human interactions, eroding civility, and decimating personal privacy. But it doesn’t have to be that way. As companies continue to work on both industry and consumer-facing AI technologies, it behooves developers and researchers to establish principles for maximizing the benefit of AI for the individual.
1. AI should build stronger human connections
We start with the broadest in scope. At the end of the day, AI technology should not result in people sitting alone in their rooms interacting in virtual reality or with a robot butler as their only companion. Every AI solution should be met with the question: Does it serve as an opportunity to bring people together, or does it risk serving as a mode of self-alienation? Those that serve the former purpose should be pushed forward.
For example, with an AI that learns an individual user’s tastes in clothing and can serve as a personal shopper, the key point isn’t to eliminate window shopping in Paris or taking your child to buy a prom dress — it’s to free up time spent shopping for work clothes or school uniforms in order to make time for these human moments. Since time is the most precious of commodities, an AI’s role is to create more opportunities for people to do what they enjoy, like spending time with friends and family and building meaningful human connections.
2. AI should facilitate opportunities, not diminish them
Across industries like retail, manufacturing, and even HR and stock trading, companies are automating more work using AI-based software. In fact, a 2017 McKinsey report found that 50 percent of current work activities could be automated by 2030 with technology that has already been tested and proven effective. It's clear that the job market is changing, and soon we will have to contend with how people will navigate this new, automated economic landscape. One solution is for AI developers to create opportunities for people to learn skills that will benefit them in this brave new world.
Companies and educational institutions can and should roll out AI-based courses to give nearly everyone access to knowledge that will prepare them for more technology-oriented roles. Another solution is for AI developers to cut users a larger piece of the profit pie by fairly compensating them for the data that powers AI algorithms. Jaron Lanier famously mapped out such an economic model in his 2013 book Who Owns the Future? Since AI is heavily reliant on the data everyday users produce just by using products like smartphones and apps, letting users monetize that data would introduce what Lanier refers to as a "humanistic information economy" that would sustain them and provide new economic opportunity in a world of AI automation.
3. AI developers should be mindful of personal data control
In its AI principles, Google promises to incorporate privacy design principles into its AI products — a good standard for anyone working with large amounts of personal user data. As people become more aware of the potential misuse of their data, AI developers should take a proactive step toward returning data control to the individual user and be more mindful about asking for informed consent and permission for data use. But data governance and provenance are no easy feat. Fortunately, the recent explosion of blockchain technology can serve as a remedy.
More developers are looking to blockchain technology as a way to return data control to the user with an immutable record that can track the provenance of nearly any piece of data. The goal is to allow users to control their own data and grant specific permission for its use without worrying that the institution they grant permission to may then sell that data to another party. It's a technological way to achieve what the recently enacted GDPR hopes to do via traditional governance. The addition of smart contracts, which serve as a blockchain's governing laws, can be an automated way to ensure the rules agreed upon between AI developer and individual data contributor are enforced.
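The enforcement idea above can be illustrated with a minimal sketch — plain Python, not an actual blockchain, and the `ConsentLedger` class and its method names are hypothetical, not any real smart-contract API. It captures the two properties the text describes: an append-only, hash-chained record (so past grants can't be quietly rewritten) and an enforcement rule that permits data use only under an active, exact-match grant that a grantee cannot pass to a third party.

```python
import hashlib
import json


class ConsentLedger:
    """Toy append-only ledger of data-use grants, hash-chained for tamper evidence.

    A stand-in for an on-chain smart contract: each entry stores the hash of the
    previous entry, so altering any past record breaks the chain.
    """

    def __init__(self):
        self._entries = []  # each: {"record": ..., "prev_hash": ..., "hash": ...}

    def _append(self, record):
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

    def grant(self, user, grantee, purpose):
        """User grants one named party permission for one specific purpose."""
        self._append({"op": "grant", "user": user, "grantee": grantee, "purpose": purpose})

    def revoke(self, user, grantee, purpose):
        """User withdraws a previously recorded grant."""
        self._append({"op": "revoke", "user": user, "grantee": grantee, "purpose": purpose})

    def is_permitted(self, user, grantee, purpose):
        """Enforcement rule: allowed only if the latest matching entry is a grant.

        Permission is non-transferable: a grant to party A never authorizes party B.
        """
        allowed = False
        for entry in self._entries:
            r = entry["record"]
            if (r["user"], r["grantee"], r["purpose"]) == (user, grantee, purpose):
                allowed = r["op"] == "grant"
        return allowed

    def verify_chain(self):
        """Recompute every hash; any tampering with past entries is detected."""
        prev_hash = "0" * 64
        for entry in self._entries:
            payload = json.dumps({"record": entry["record"], "prev_hash": prev_hash},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

In use, a grant to one AI developer says nothing about anyone else: `is_permitted("alice", "some_ai_vendor", "train_model")` returns True only after Alice records that exact grant, and a different grantee or purpose is refused — the "selling data onward" problem the paragraph describes. A real deployment would put this logic in an actual smart contract rather than a single mutable process, which is what makes the record immutable in practice.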
4. AI should maximize our access to resources
Fully personalized AI is on the horizon, and soon we will have AI that looks, talks, and even acts like us. Developers of this technology should begin pondering how and where we will use this new breed of AI. This type of AI could provide much-needed resources to impoverished people and regions of the world. For example, the U.S. is facing a potential shortage of up to 120,000 physicians by 2030, according to a recent report by the Association of American Medical Colleges. We could alleviate some of that shortage by having doctors send their AI to attend to basic intake and triage duties, freeing human doctors to focus on more challenging issues of diagnosis and care.
Teaching is another profession where shortages disadvantage millions around the world, and personal AI could allow the best teachers to reach more children. It is critical for AI developers to look for ways to multiply precious human resources like these, serving as a reminder that AI should not supplant humans, but augment them.
Many of the same guiding principles apply to both commercial and consumer AI technology. But as AI becomes ubiquitous in nearly every aspect of our lives, developers and companies need to constantly evaluate its potential impact on individuals and their immediate communities. With advancements showing no signs of abating, it's important to establish principles that will lead to AI that helps us be better, more connected people in an increasingly digitized world.
Nikhil Jain is the CEO and cofounder of Los Angeles-based Oben, a company that develops AI-driven speech/singing, computer vision, and natural language processing for virtual reality, augmented reality, and mixed reality experiences.