While Google Cloud AI head Andrew Moore faces a vast array of challenges in his new gig, one of his biggest priorities remains making the power of artificial intelligence accessible to as many people as possible.

“I’m quite frightened by how many potential problems there are in the world,” said Moore. “I want as many people as possible to be well equipped with technology to go out and deal with them.”

Moore was interviewed on stage by Nara Logics CEO Jana Eggers at VentureBeat’s Transform 2019, a conference exploring AI’s major trends. Much of the conversation centered on Moore’s emphasis on connecting the people who do the deeply complex work of developing algorithms with people from other disciplines, in order to drive practical, high-impact uses of AI.

His viewpoints carry a fair bit of influence in this fast-moving field. Moore previously worked at Google from 2006 to 2014 and rejoined the company late last year as head of Google Cloud AI, leaving his post as dean of Carnegie Mellon University’s School of Computer Science to take over from famed ImageNet creator Dr. Fei-Fei Li.

In April, Google Cloud Platform introduced its biggest initiatives since Moore and new CEO Thomas Kurian joined the company, including AI Platform, a machine learning collaboration tool for data scientists, and cross-cloud solution Anthos, which Google is using to entice more customers to adopt its cloud offering.

But a common thread throughout his work has been the need not to overlook the essential role of humans in shaping AI. The more people involved in those efforts, he argues, the faster AI will be embraced and adopted.

Non-data scientists can find the technology mysterious and intimidating. But some of the most impressive uses of AI, he says, have come when outsiders get involved in the design and development process. Onstage, Moore cited examples of services created in Brazil to track animal poachers and illegal logging in the rainforests.

To broaden access, Moore and his teams have offered tools such as AutoML, which automates some of the work typically done by data scientists so that people without PhDs can create AI systems. That tool helped British zoologists build an AI system that used image recognition to count species in the wild.

“With tools that make machine learning easy, by automating more things, it lets people get creative,” he said.

By the same token, making it easier to see AI’s impact on real-world problems creates new motivation for data scientists. They often spend a huge chunk of their time on the thankless grunt work of building the underlying data and structures that algorithms rest on, with the application part coming later.

“If you watch the life of a data scientist, and what makes this person get out of bed every morning, it’s the meaning of whatever work they are doing,” Moore said.

Finally, Moore offered a word of caution for organizations considering AI, and for people thinking about getting involved in such projects. To really leverage such tools, organizations must be prepared to rethink their organizational structure and business models. And to those who might be asked to work on such efforts, Moore said they should be clear about what the goals are.

“If you find yourself in an organization where they are saying, ‘Hey, we’re going to introduce AI because our competitors are using AI,’ there is a danger they will be using AI without connecting it to a business model,” he said. “I would just walk away from a project that doesn’t know why it’s using AI.”

VentureBeat reporter Khari Johnson contributed to this story.