We’ll likely never see any data on this, but judging from my own test, the reports of several users, and the lack of media interest, it seems obvious the Facebook Messenger chatbot for President Obama is a dismal failure. A Washington Post report described the bot as “labor intensive,” and few other outlets have said much about the experience, except to note that it provides another option for sending the President a letter. (He reads ten of them per day and responds to a few.)
To call the bot an “epic fail” seems harsh, but I see it as a step back in chatbot development and for the industry in general. We need more powerful and intelligent bots that use machine learning and can handle some complexity in a request. When I used the bot, the entire experience seemed like a glorified form. Sadly, that’s probably because it is a glorified form. You can even see how the programmer took this page, made sure you typed your name and address correctly, confirmed the question, and then did absolutely nothing else, such as sorting your question into a category (say, personal or political) or sending you trivia about the President.
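Even the category idea would have been a small lift. Here’s a minimal sketch of what keyword-based routing could look like; the categories and keyword lists are my own illustrative assumptions, not anything the actual bot does:

```python
# Hypothetical keyword-based router for incoming letters.
# Category names and keyword sets are illustrative assumptions.
CATEGORIES = {
    "personal": {"family", "thank", "story", "school", "inspired"},
    "political": {"policy", "healthcare", "immigration", "economy", "vote"},
}

def categorize(letter: str) -> str:
    """Pick the category whose keywords overlap the letter the most."""
    words = set(letter.lower().split())
    scores = {name: len(words & keywords) for name, keywords in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("I want to ask about healthcare policy"))  # political
```

A real system would use a trained classifier rather than keyword matching, but even this crude step would let staff triage letters before a human reads them.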
Worse, since we’re not that comfortable using chatbots yet, those behind this “form-to-bot” conversion forgot to think through how unsettling it is to hand over your personally identifiable information (which is governed by U.S. privacy laws). Facebook is currently testing end-to-end encryption with a subset of users, but the feature is not widely available. With this bot, there’s no guarantee about how the information will be used: can you delete it later? Is it stored securely? The bot requests your information and then spits out a notice stating that your letter will be added to a queue, which is not exactly reassuring. It would be helpful to know how a request is processed.
The most troubling issue, though, is that this is one of the most high-profile chatbots in the world, and yet it doesn’t have any inherent intelligence. It merely processes questions from the public, which will give people the impression that a chatbot is nothing more than an app that can talk. There was an opportunity here to build a chatbot for President Obama that analyzes a question and provides an answer. If someone typed “How old are you?” the bot could have sent a response (he’s 55). The Google search app on your phone can do that, and it even understands context. If you use the Google app and ask a follow-up question, like “Where was he born?” without saying who you mean, the Google app will reply that “he” was born in Honolulu. In other words, the Google app is smarter than the Obama bot.
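Answering simple factual questions like that doesn’t require Google-scale intelligence. A bare-bones sketch of canned-answer matching, where the patterns and facts are my own assumptions for illustration:

```python
# Hypothetical pattern-matching answerer with a handful of canned facts.
# Patterns and facts are illustrative assumptions, not the real bot's logic.
FACTS = {
    "age": "He's 55.",
    "birthplace": "He was born in Honolulu.",
}

PATTERNS = {
    "how old": "age",
    "where was he born": "birthplace",
    "where were you born": "birthplace",
}

def answer(question: str) -> str:
    """Return a canned fact if a pattern matches; otherwise fall back."""
    q = question.lower().strip("?")
    for pattern, key in PATTERNS.items():
        if pattern in q:
            return FACTS[key]
    return "I'll add your letter to the queue."

print(answer("How old are you?"))    # He's 55.
print(answer("Where was he born?"))  # He was born in Honolulu.
```

Tracking conversational context the way the Google app does (resolving “he” to the last person mentioned) takes more work, but even this lookup-table level of smarts would have made the bot feel like more than a form.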
The bot could have done so much more. It could have provided information about what President Obama has accomplished in office (or done to screw things up, depending on your political viewpoint), served up a few points of trivia about him, or found some other way to engage with the public during that brief five-minute window of opportunity. The chatbot reinforces the idea that a President is unapproachable, distant, and too busy for dialogue. A useful chatbot could help challenge that notion.
And why not use some other form of security? A simple request for information that few other people know could have worked (such as where you went to grade school or who your best friend was in high school). The bot could at least provide some basic security options, along with a way to go back and retrieve your request later.
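The mechanics of a security question are well understood: store only a salted hash of the answer, never the answer itself, and compare in constant time. A minimal sketch (the salt and question are placeholders; a real system would generate a random salt per user):

```python
import hashlib
import hmac

def hash_answer(answer: str, salt: bytes) -> bytes:
    """Normalize the answer and derive a salted PBKDF2 hash from it."""
    normalized = answer.strip().lower().encode()
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 100_000)

def verify(attempt: str, salt: bytes, stored: bytes) -> bool:
    """Constant-time comparison of the attempted answer against the stored hash."""
    return hmac.compare_digest(hash_answer(attempt, salt), stored)

salt = b"example-salt"  # illustrative; use a random per-user salt in practice
stored = hash_answer("Lincoln Elementary", salt)
print(verify("lincoln elementary", salt, stored))  # True
print(verify("Roosevelt Middle", salt, stored))    # False
```

Normalizing case and whitespace before hashing keeps “Lincoln Elementary” and “lincoln elementary” from being treated as different answers, which matters for casual typed input in a chat window.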
Honestly, it would not have taken much to improve the bot. Even taking a few extra steps beyond form replacement would have given everyday folks the sense that a chatbot is worth their time. In my view? Most people will go back to the form to write a letter. It’s much faster.