Sexual harassment has made plenty of headlines in recent months, but more often than not, bad behavior doesn’t happen in the spotlight. As Oprah said in her Golden Globes Lifetime Achievement speech, sexual harassment happens in factories and in fields everywhere, to people of all colors, classes, and creeds.
In the modern office, there is one type of harassment that HR teams could solve using the very same innovation that enables it. I’m speaking, of course, of the kind that happens regularly online over Slack, Skype, or work computers. For sexual harassment that occurs in cyberspace — and, like bullying, it often does — technological solutions like AI could be the savior victims didn’t know they needed.
Sexual harassment goes online
The term “sexual harassment” wasn’t coined until the ’70s, but the behavior it describes is far older. While it still occurs openly in particularly toxic cultures, by and large, offenders have simply become more private in their tactics, whether via button-under-the-desk schemes or other encounters designed not to be seen or heard.
The virtual world seems to provide a safe space for abusers, but there’s a large caveat. What happens online is permanent, which means a simple screenshot can document exactly what a person jokes about or requests through emails and DMs. However, this only works if the victim is comfortable capturing and sharing the messages, which is a big if.
As it stands, 75 percent of sexual harassment incidents in the workplace go unreported, and employees who do report them often experience retaliation. But artificial intelligence could identify and report such instances automatically to help eliminate the issue of unreported harassment. Smart software of this nature could, in theory, recognize inappropriate messages and send them directly to HR. This would provide unprecedented support for individuals on the receiving end of harassment.
There are obvious drawbacks to this scenario, as described in the New York Post last July — censorship, false alarms, and privacy chief among them. But employees should know they have little right to privacy on work computers whether or not there’s a bot involved. In fact, 66 percent of major companies included in a survey by the American Management Association said they monitor employees’ internet use.
Tech comes to the rescue
There are various startups and organizations developing AI to combat harassment at work. Botler.ai, for instance, uses natural language processing to scan online conversations and give its users an understanding of whether what they’ve experienced violates the U.S. criminal code or Canadian law. In this way, the software acts as legal counsel that can help potential victims of harassment determine whether an incident has broken any employment laws. Beyond that, AI algorithms can log conversation data and detect patterns in sentence structure, flagging content that appears inappropriate.
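As a toy illustration of what pattern-based flagging might look like (this is not how Botler.ai or any vendor actually works), a minimal sketch could match incoming messages against a review list and queue hits for HR. The phrase list and function names here are invented for the example; a real system would rely on a trained language model rather than fixed patterns.

```python
import re

# Hypothetical patterns for illustration only; a production system would
# use a trained classifier, not a hand-written list.
FLAGGED_PATTERNS = [
    r"\bsend me a (photo|pic)\b",
    r"\bwhat are you wearing\b",
]

def flag_message(message: str) -> bool:
    """Return True if the message matches any flagged pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in FLAGGED_PATTERNS)

def review_queue(messages):
    """Collect messages that would be routed to HR for human review."""
    return [m for m in messages if flag_message(m)]
```

Even in this simplified form, the design choice matters: the software only routes candidate messages to a human reviewer, rather than deciding on its own that harassment occurred, which helps limit the false-alarm problem discussed above.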
Many companies have implemented similar AI solutions already. An article on Workforce.com dives deeper into the issue, claiming “AI use will only grow in the workplace and outside of it” and “HR will need to be prepared on how to handle the data.”
Another example of a company creating AI to combat inappropriate sexual behavior, in this case on college campuses, is Callisto. Online systems like this one can detect repeat offenders and empower harassment victims to make reporting decisions. The technology can save time-sensitive written records of an incident, notify users if another victim has named the same perpetrator, and report the incident electronically.
If implemented widely, unbiased machine analysis could help HR teams identify the intimidating power tactics and ingrained patriarchal attitudes that stifle personal development and performance in the workplace.
AI intervenes in real life
While AI may be most effective in a virtual setting, it’s not limited to that realm. AI tools that can help identify sexual harassment in open conversations include digital assistants like Amazon’s Alexa, which can already recognize certain words and phrases said aloud.
As tech conglomerates reprogram and update their AIs, we can expect digital assistants to learn to recognize inappropriate sexual remarks and slurs, and to monitor workplace behavior with programming that accounts for all forms of sexual misconduct. Most companies already use video surveillance and provide written anti-sexual-misconduct policies, but AI assistants may be the first step toward identifying predators before an employee ever files a report. If an AI could pick up on trigger words or phrases, for example, it could alert employers the moment harassment takes place.
Although there are limitations regarding privacy, features like these are bound to become more popular, especially in the #MeToo era. These devices not only safeguard the workspace, but also save time by efficiently identifying incidents and streamlining reporting techniques.
Prepare for AI assistance
When professionals who’ve experienced sexual harassment in the workplace know they’re not alone, it can help them process unjust experiences ranging from uncomfortable to traumatic. Even if it means employing bots as babysitters, the fact that businesses can use AI to build better work environments is a step toward dismantling discriminatory practices.
In time, this will help address systemic problems and correct inappropriate behavior with intelligent insight. Victims no longer need to feel boxed into their cubicles: they have a technology-fueled “AI-triarchy” on their side to help them achieve safety and equality.
Debrah Lee Charatan is the cofounder, principal, and president of BCB Property Management.