Artificially intelligent chatbots have dramatically changed the workplace, and they’re not going anywhere. Given the seismic nature of this change, it is crucial for forward-thinking executives both to understand the technology and to appreciate how it changes their business’s policies and procedures.

Chatbots excel at automating tedious, repetitive tasks, freeing employees to think strategically, innovate, and take on higher-level work. But introducing any new technology brings with it complexities to appreciate and pitfalls to avoid. One of these complexities involves the issue of sexual harassment in the workplace.

Can bots be harassed?

The first question to consider is whether a bot can be sexually harassed. Legally speaking, one cannot sexually harass a bot, but there are three reasons to think such behavior is still a bad idea:

  • It perpetuates a sexist and sexually inappropriate culture.
  • It may make one more likely to engage in that sort of prurient behavior with a fellow human employee.
  • You may believe you are interacting with a bot when you are actually interacting with a human.

Can bots harass humans?

It may seem curious to wonder about bots and sexual harassment. After all, they’re just pieces of software, not people; they can’t intentionally do things in the way that humans can. But what many people don’t know is that, legally, sexual harassment does not have to be intentional, which at least opens the door to asking whether bots can perpetrate acts of sexual harassment. Given this, there are three important takeaways for executives to consider when implementing chatbots in the workplace:

  • It is vital to provide training for the end user regarding how to interact with the chatbot.
  • Your organization should delineate clear company policies for how employees are to engage with the chatbot.
  • These policies and procedures should be codified somehow, such as in the employee handbook.

Get ahead of issues

Fortunately, B2B chatbots are very safe to use. They generally go awry only when a human intentionally misuses them, an inherent risk of adopting any form of technology. Well-designed and well-executed B2B bots are safe for the workplace for the following reasons:

  • They are trained only on professional data.
  • They are constrained to specific job tasks.
  • They even have the potential to help prevent sexual harassment in the workplace.

Even though these B2B chatbots are safe for the workplace, there are still a number of measures you should take to ensure the safety and success of the bot’s implementation.

  • Know what the bot’s training and data sources are, and understand the implications those sources will have on your bot’s performance.
  • Establish policies to protect the bot against tampering and unwanted manipulation.
  • Make sure there is a human in the loop monitoring and training the bot to guard against unexpected behavior.

And of course, remember that vigilant monitoring of any new technology is pivotal for ensuring its effective implementation. That’s why my company, Talla, always has human employees overseeing bots to ensure proper functioning.

When it comes to the matter of sexual harassment, the two best ways to ensure a smooth integration of bots into your company are: (1) be well informed about the relevant features when selecting which bot platform to employ, and (2) institute policies that protect employees, shield your company from liability, and provide procedures for remediation when infractions arise.

Zach Harned writes about the ethical and legal implications of artificial intelligence for Talla, an AI-assisted service management company. He has an M.S. in clinical psychology, an M.A. in ethics, and is part of the Stanford Law Class of 2020.