Facebook is slowly introducing new security features in Messenger that could have an impact on the world of chatbots, both now and in the near future.

If you read the press information, you'll find that the change, known as end-to-end encryption, adds safeguards to one-to-one chats between two users. Notably, a secret conversation appears on only one device: if you have an iPad in the bedroom and a phone on the kitchen table, the message shows up on one of them, not both. You can also set a time limit for how long a message sticks around, say five minutes, after which it disappears forever. If this sounds a bit like the early days of disappearing Snapchats, you'd be right.
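The disappearing-message timer is easy to picture with a toy sketch. The `DisappearingInbox` class below is a hypothetical illustration, not Facebook's implementation: a store that simply refuses to return any message whose timer has run out.

```python
import time


class DisappearingInbox:
    """Toy store for messages that expire after a per-message time limit.

    Purely illustrative: a real client would delete the ciphertext on a
    timer, while this sketch just drops expired entries on each read.
    """

    def __init__(self):
        self._messages = []  # list of (expires_at, text) pairs

    def send(self, text, ttl_seconds):
        # Record when this message should stop being readable.
        self._messages.append((time.monotonic() + ttl_seconds, text))

    def read(self):
        now = time.monotonic()
        # Drop anything whose timer has already run out.
        self._messages = [(t, m) for t, m in self._messages if t > now]
        return [m for _, m in self._messages]
```

Sending with `inbox.send("meet at 5?", ttl_seconds=300)` would make the message readable for five minutes and gone afterward.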

Facebook built the feature on the Signal Protocol, a technology developed by Open Whisper Systems that carries the stamp of approval of Edward Snowden and many others. Facebook was quick to point out in the press release that the private chats do not support GIFs, video, or financial transactions. For now this is a limited test with a few users; the social networking giant plans to roll the option out to more users over the summer, so your name may be added to the secret list soon. “During this test, we will gather feedback about the functionality, measure performance and introduce tools to enable you to report objectionable content to us,” the company announced.
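End-to-end encryption means only the two endpoints ever hold the key; the server relaying the messages never sees it. The Signal Protocol's key agreement is far more sophisticated than this (it layers a ratchet on top of an initial handshake), but the core idea, two parties deriving a shared secret over a public channel, can be sketched with textbook Diffie-Hellman. The parameters below are toy-sized and assumed for illustration only:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters. P is the Mersenne prime 2**127 - 1,
# far too small for real security; it's used here only to show the idea.
P = 2**127 - 1
G = 3


def keypair():
    """Generate a private exponent and the public value G**private mod P."""
    private = secrets.randbelow(P - 3) + 2
    public = pow(G, private, P)
    return private, public


# Alice and Bob each make a keypair and exchange ONLY the public halves.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public value.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)

# Both arrive at the same secret; a server in the middle that saw only
# the public values cannot feasibly compute it.
assert alice_secret == bob_secret

# Hash the shared secret down to a 32-byte symmetric message key.
message_key = hashlib.sha256(str(alice_secret).encode()).digest()
```

Only the derived key can decrypt the messages, which is why Facebook's servers, and anyone who subpoenas them, see ciphertext rather than the chat itself.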

So what does this have to do with chatbots and artificial intelligence? Plenty. As humans, we tend to chat differently with A.I. bots than we do with other people, such as a coworker or a family member. We make some assumptions about anonymity, and we tend to speak more directly when there's a bot involved. In my case, there are times when I will even say things that could be construed as rude or even harsh; I figure the bot won't mind, and I enjoy testing to see if there are any A.I. routines to deal with that.

As you may have heard, chatbots are popping up all over. There are prayer bots, weather bots, and sports bots. There's a bot that helps you fall in love. Yet we're still in the early period when everyone is figuring out how to build bots and what they can do. In an interesting recent post, UI expert Ariel Verber noted that some activities are much faster with a quick Google search. We're adjusting to life with chatbots, and chatbot developers are adjusting to life with us.

The implication here is that chatbots could one day be highly secure. I can imagine bots that do much more than chat about our bank accounts. One possibility: we might confide in a chatbot far more than we would now, even revealing things we've never told anyone, and then find help. We could see a chatbot as a confidant, an adviser, and even a friend (as corny as that sounds) if we knew the conversation was impenetrably secure and that it was incredibly unlikely anyone would learn we were telling our deepest secrets.

But that’s just on a personal level. In business, I could see encrypted chatbots providing more value when we seek advice about conflicts at work, human relations issues, or even legal issues. A potential employee could chat with a bot in a secure environment, one where the conversation never shows up on another device and automatically disappears, to ask about employment opportunities without worrying whether their boss will see the chat (or whether a human on the other end will whisper about it to their brother-in-law who works at the firm).

It’s a good move for Facebook and a step in the right direction for chatbots. It means we can learn to trust them more, which is all part of their evolution (and ours).

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.