Last year, the first season of HBO’s Westworld concluded as most stories about robots do: with the machine eliminating its maker.

This shouldn’t have been surprising. Across Hollywood, from Terminator to Ex Machina, the first thing artificially intelligent robots seem to do once they gain consciousness is go rogue.

It makes for a cool ending, but if the writers of Westworld’s season 2 want to aim for more science and less fiction, they might consider having their “hosts” not eliminate humans, but help them prevent cyberattacks or improve cancer treatments. And what if the show’s humans didn’t live in fear of robots, but instead set some ground rules for how AI technology can be used meaningfully, for the benefit of all?

Because that is what’s happening in the real world.

While the idea of a bad robot remains purely science fiction, I’m happy to report that computing systems with the ability to “think” and “learn” are no longer just a part of our Sunday night programming. They are here today. And it’s up to all of us to help make sure these technologies are used responsibly, transparently, and to everyone’s advantage.

Cognitive systems can understand complicated forms of data, like images and human language. And because this data is available easily through the cloud and on an open platform, cognitive systems are transforming entire industries from financial services to health care. Doctors, for example, are using Watson to help deliver evidence-based treatment options to cancer patients and will soon have access to cognitive technologies to help interpret MRIs and other medical images.

Discovering a new technology is half the battle; you have to bring it thoughtfully into the world, too. Indeed, for a technology to become pervasive, society needs to establish a set of guiding principles for its use. (How would the invention of the car have worked out if, for instance, no one had advocated for driving on one designated side of the road?)

I believe that these three Principles for a Cognitive Era represent the most important guidelines for us and the broader technology, business, and academic communities to follow.

1. AI must augment, not replace, human intelligence

When people think about what cognitive technology can do, especially in terms of reading and analyzing data, they can easily become preoccupied with the idea that these systems can do things better than humans can. But cognitive technologies should not replace human thinking.

Instead, AI in partnership with professionals is enhancing human intelligence in countless ways. For example, health care is among the most complex industries, and it generates some of the largest data sets imaginable. So health care organizations like Memorial Sloan Kettering, Mayo Clinic, Medtronic, and others are using AI technologies to improve and personalize patient care. Meanwhile, in the legal industry, ROSS Intelligence is applying cognitive technology to legal research. And, in the environmental field, OmniEarth is using the technology to help improve water conservation with organizations and local governments in California.

These examples demonstrate that AI technology excels not at ruling over humans, but at working alongside them.

2. AI must be trustworthy and transparent

For humans to use cognitive technology to its full potential, we need to have confidence in AI-enabled systems. Trust in AI starts at the source, and that’s why I believe in making AI transparent. We must make clear when and for what purposes AI is being applied in the cognitive technologies we develop.

This includes practicing transparency about the major sources of data that inform cognitive insights, as well as the methods used to train cognitive systems. For many, if not all, companies, data is a critical part of their value proposition, and thus companies must retain ownership of their business models and intellectual property, especially their data.

3. Everyone deserves a chance to participate in the cognitive economy

Everyone should have the opportunity to benefit from cognitive technology. After all, AI technologies are being built and deployed with the goal of enhancing human intelligence. Their full capabilities will not be realized unless we can all work with these tools effectively.

Too often AI technologies are seen as job killers, when in fact cognitive computing technologies will be a key factor in revitalizing traditional industries and jobs. Meaningful job creation is not about white collar vs. blue collar jobs, but about the “new collar” jobs that employers in many industries demand. These jobs remain largely unfilled, even as industries from manufacturing to agriculture are reshaped by technology. Entirely new jobs are being created where major enterprises deploy AI to augment and amplify human intelligence, rather than replace people. New careers include server technicians and database managers, as well as new roles in cybersecurity, data science, health care, and financial services.

Tech leaders must commit to working with students and the workforce to develop the new skills and knowledge that will help them work effectively with new technology. That can happen through partnerships with universities around the world, or through disruptive education methods that better prepare young people for college and cultivate the science, technology, engineering, and math (STEM) skills underpinning some of the nation’s fastest growing industries and new collar careers.

It isn’t enough to only apply our brainpower in the lab — we must also think about how our technology will be offered and used out in the world.

In other words, the cognitive era needs not just engineers or policymakers, not just scientists or philosophers, but all of them. We need what we at IBM call “thoughtful pioneers.” I encourage you to join us.