Conversations about artificial intelligence (AI) get caught up in the idea that machines can do it all. But whether it’s picking out groceries or suggesting who to hire, there’s an inherent risk in placing blind trust in an algorithm. Luckily, the conversations are starting to shift.

Last week, Google announced its new PAIR initiative (People + AI Research) — an effort to push the business world and society to consider the biases in AI. To foster a better understanding of what informs AI insights, Google is releasing a set of open source tools to give engineers a clear view into the data that powers their algorithms.

It’s good to see Google go down this path. But as advanced and “unbiased” as algorithms may get, people will need more context to help them evaluate AI-powered recommendations.

Let’s consider a brief analogy with journalism: Good journalists inform and shape public opinion by presenting and connecting facts, while ultimately allowing readers to draw their own conclusions. In a perfect world, users of AI should have a similar experience.

AI users should be given information on how the insights and suggestions presented to them were created — details ranging from the source of the data to the author of the code underlying them. For example, does the HR manager using AI to calculate a job offer know whether the algorithm corrects for average salary disparities between men and women?
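To make that concrete, here is a minimal Python sketch of the kind of audit an HR team could run to see whether recommended offers diverge by gender. The column names and the disparity threshold are illustrative assumptions, not features of any real product; it is the sort of check a transparent system would surface on its own.

```python
import pandas as pd

def audit_salary_recommendations(df: pd.DataFrame, threshold: float = 0.05) -> dict:
    """Compare mean recommended salaries across gender groups.

    Assumes the frame has hypothetical 'gender' and 'recommended_salary'
    columns. Flags whether the relative gap exceeds an arbitrary threshold.
    """
    group_means = df.groupby("gender")["recommended_salary"].mean()
    gap = (group_means.max() - group_means.min()) / group_means.max()
    return {
        "group_means": group_means.to_dict(),
        "relative_gap": round(float(gap), 3),
        "exceeds_threshold": bool(gap > threshold),
    }

# Example usage with toy data:
offers = pd.DataFrame({
    "gender": ["F", "M", "F", "M"],
    "recommended_salary": [88000, 95000, 90000, 97000],
})
print(audit_salary_recommendations(offers))
```

A check like this doesn’t remove bias by itself, but it gives the person acting on the recommendation something concrete to weigh.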

In journalism, there are systems in place to bring all inherent biases to a reader’s attention. What publication does this article come from? Who wrote it? What is the reporter’s bio? How did the reporter find these facts? What sources did they use, and if they aren’t listed, how were the facts corroborated?

Before these types of contextual clues are offered to AI users, there’s a lot of work to be done. Organizations like The Ethics and Governance of Artificial Intelligence Fund are working to accelerate research around the social and cultural implications of AI, but a point where users can pull back the curtain entirely is still out of reach. Ideally, every AI-generated recommendation will arrive backed by a set of accessible metadata that provides insight into the potential biases at play. This might take the form of a push-notification disclaimer or a verbal warning from Alexa. Unfortunately, I believe we are still a number of years away from that level of transparency.
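As a rough illustration of what that accessible metadata could look like, here is a minimal sketch of a provenance record attached to a recommendation. The field names are assumptions for the sake of example, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationProvenance:
    data_sources: list          # where the training data came from
    model_author: str           # team or vendor responsible for the code
    trained_on: str             # date range the training data covers
    known_limitations: list = field(default_factory=list)

    def disclaimer(self) -> str:
        """Render a short, user-facing disclaimer (e.g., a push notification)."""
        limits = "; ".join(self.known_limitations) or "none documented"
        return (f"Trained on {', '.join(self.data_sources)} ({self.trained_on}) "
                f"by {self.model_author}. Known limitations: {limits}.")

# Example usage (all values hypothetical):
prov = RecommendationProvenance(
    data_sources=["internal HR records"],
    model_author="Acme Analytics",
    trained_on="2014-2016",
    known_limitations=["does not correct for historical pay gaps"],
)
print(prov.disclaimer())
```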

Without these explicit contextual clues, researchers may chip away at the gender, class, and racial biases embedded in AI, but the technology won’t function independently of people and their common sense.

What’s interesting is that the human brain is already connecting the dots to correct AI that lacks context. When people see AI-powered recommendations, react to them, and make real-time adjustments, they create a feedback loop that makes the AI stronger than before. Every time a person redirects an AI system, like choosing a different route than the one their Waze app suggests, they train it to provide better, more reliable recommendations in the future. By maintaining this level of control over AI’s impact on our lives, and over what the technology learns, we contribute to a more productive AI ecosystem.
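As a rough sketch of that feedback loop, the snippet below simply logs each case where a person overrides a suggestion. The function and field names are hypothetical; in a real system these overrides would be the corrective signal fed back into retraining.

```python
from datetime import datetime, timezone

feedback_log = []

def record_feedback(suggestion: str, user_choice: str) -> None:
    """Store whether the user accepted the suggestion or redirected the system."""
    feedback_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "suggested": suggestion,
        "chosen": user_choice,
        "accepted": suggestion == user_choice,
    })

# Example: the app suggested Route A, but the driver took Route B.
record_feedback("Route A", "Route B")
record_feedback("Route A", "Route A")

overrides = [f for f in feedback_log if not f["accepted"]]
print(f"{len(overrides)} of {len(feedback_log)} suggestions were overridden")
```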

Over time, AI will start to provide insight into more complex topics. Beyond picking out your groceries or new shoes, it will suggest which college to attend, which job candidate to hire, or which medicine to take, based on its knowledge of your history. When this day comes, AI may spread across every industry and support professionals such as doctors, technicians, manufacturers, and educators. Understanding the source of AI’s insight will become imperative, and relying on people to vet and act on its recommendations will make those high-stakes decisions more sound over time.

The never-ending initiative

Thanks to Google’s PAIR initiative, the data behind AI will become more reliable, which will ultimately make the decision to trust AI a little easier. But the people interacting with AI-generated information remain the most crucial and irreplaceable part of the equation.

The datasets powering the AI ecosystem may improve, but a healthy skepticism should inform how we interact with AI now and into the future. The journey toward global deployment of a completely objective technology may never be complete. But the more AI can connect people to the source of its insight, the more accurate and trustworthy it will become. As someone exploring AI’s business applications, I’m excited to see how close we can get.

Tim Lang is senior executive vice president and CTO at MicroStrategy Incorporated.