How do you actually become friends with someone? Is it a slow process of getting to know each other or is there a moment when you just hit it off? Do we enjoy staying friends with people who have become part of our lives or does there come a point when we just let them go?
I was Facebook friends with a woman named Claire when I was in college and we used to discuss sci-fi movies together. Though we’ve lost touch, I miss the friendship we had developed over that very specific topic of conversation.
As part of my ongoing research into AI-driven product design, I’ve found that looking through the lens of social psychology provides a very interesting way of understanding the world. Ten people can read the same questions and answer them very differently.
There is no exact algorithm that explains how people make friends. It is effectively a function of the personalities of the two people involved and the time they spend together. Often, their interactions reflect the experiences they’ve gone through together.
We live in the age of not only personalization but a very aggressive, down-your-throat form of it. Everywhere we go — from shopping on Amazon to browsing movies on Netflix — personalization is one of the key, magical components that promises to improve our experience.
I’ve found that this trend toward personalization derives from our intrinsic need to be understood. Humans are a very social species. At some point we have to wonder: why don’t we tap what we know about personality psychology and use it to make apps that are not just personalized, but capable of becoming our friends?
This brings up the question of what kind of relationships we build with the apps we’re currently using. And once these apps get smarter and develop more advanced communication skills, what kind of relationships might we have with AI bots in the future?
While this is a very open-ended philosophical question, I’ve attempted to boil it down to a concrete technical problem. But for that, I had to ask myself something deeper.
When does software become a friend?
Remember Tamagotchis? They were digital pets you carried with you at all times. Each pet had its own hunger and happiness meters, as well as a life cycle, and they were designed so that the user became emotionally invested in them. Tamagotchis are one of the best examples of people forming a relationship with what is essentially ones and zeroes. Software does have a precedent for engaging in “friendship.” We’ve tried to solve this problem before.
The reason I like science fiction movies may be very different from the reason someone else likes, say, sci-fi thrillers. Collaborative filtering glosses over exactly these individual differences, which is a huge drawback when we want recommendations that are accurate for one specific person. But peer-to-peer recommendations?
Claire and I were both sci-fi fans, and I was honestly more likely to watch a movie she suggested than one Netflix suggests to me today. What did Claire understand about my taste that mainstream personalization algorithms don’t? Well, as far as I can remember, we had discussed and debated many sci-fi movies for hours. She liked the Alien movies; I didn’t. I enjoyed the softer class of sci-fi movies like Her and Limitless; she wouldn’t. We both liked superhero fiction. We both hated Fringe.

Looking back, I realize that we had already modeled each other’s preferences in the best neural network we have: our brains. The models we built of each other might even have been better than the ones we had of ourselves. That is why the power of suggestion works so well in peer-to-peer recommendations. Your close friends sometimes do a better job of learning about you than you do yourself.
Learning by friendship
After thinking about this problem for some time, I came to a possible solution. One thing word-vector models do really well is take natural language text and build a vector form of each word. In theory, you can combine the vectors of all the words in a corpus to generate a vector representation of the whole corpus.
Does that sound confusing? Let me try to explain word vectors in simpler terms. Suppose you have three words (attack, defend, assail) in your vocabulary, and you can form a lot of sentences with them. Attacking is like assailing. Attacking is the opposite of defending. To not assail is to defend, and so on. When we generate word vectors from this information, attack and assail end up close to each other, while defend sits further from both.
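To make that intuition concrete, here is a toy sketch. The 2-d vectors are hand-picked purely for illustration; real embeddings (word2vec, GloVe, and the like) are learned from large corpora and have hundreds of dimensions:

```python
import math

# Hand-picked 2-d "word vectors" -- illustrative only, not learned.
vectors = {
    "attack": [0.90, 0.10],
    "assail": [0.85, 0.15],
    "defend": [0.10, 0.90],
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means near-identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "attack" should sit much closer to "assail" than to "defend".
print(cosine(vectors["attack"], vectors["assail"]))  # high (~1.0)
print(cosine(vectors["attack"], vectors["defend"]))  # low
```

Cosine similarity is the standard closeness measure in these spaces, which is why it appears here rather than raw distance.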
Word vectors are a very powerful way to do NLP with the help of neural nets. I could build a personality test, or even use something like Myers-Briggs, to get a natural-language text representing my own personality. I could also try to derive the same thing from the text messages and status updates I have sent over the years on Facebook. Representing personality is the first known unknown (something that you know you don’t know) in the problem.
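One simple (and admittedly naive) way to collapse such a text into a single personality vector is to average the vectors of its words. Everything below, the tiny vocabulary and its numbers, is hypothetical; a real system would use pretrained embeddings over a user’s actual messages:

```python
# Hypothetical 2-d embeddings for a tiny vocabulary (illustration only).
embedding = {
    "curious": [0.8, 0.1],
    "quiet": [0.2, 0.7],
    "sci-fi": [0.9, 0.3],
}

def text_to_vector(words, embedding):
    """Average the vectors of known words; skip out-of-vocabulary ones."""
    known = [embedding[w] for w in words if w in embedding]
    dims = len(next(iter(embedding.values())))
    return [sum(v[i] for v in known) / len(known) for i in range(dims)]

# A made-up personality description reduced to one vector;
# "wanderlust" is not in the vocabulary, so it is ignored.
profile = text_to_vector(["curious", "sci-fi", "wanderlust"], embedding)
print(profile)  # roughly [0.85, 0.2]
```

Averaging throws away word order, but it is a common first baseline for turning a document into a single point in the embedding space.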
All I need is a similar vector representation of movies that the “friendship algorithm” can understand.
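If both sides live in the same vector space, the matching step could be as simple as ranking movies by cosine similarity to the personality vector. The vectors below are invented for the sketch; in practice the movie vectors might come from averaging word vectors over plot summaries or reviews:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented vectors: one personality and a few movies embedded in the
# same hypothetical 3-d space.
personality = [0.7, 0.2, 0.6]
movies = {
    "Her": [0.65, 0.25, 0.55],
    "Alien": [0.10, 0.90, 0.20],
    "Limitless": [0.60, 0.30, 0.50],
}

# Rank movies by how closely they align with the personality vector.
ranked = sorted(movies, key=lambda m: cosine(personality, movies[m]), reverse=True)
print(ranked)  # closest match first
```

This is only the personalization half of the idea; the “friendship” half, below, is where it stops being a straightforward similarity search.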
The friendship function
The second known unknown left here is the magical friendship function itself. What would friendship and understanding be in the eyes of a bot? Would it be a neural net trained on one person for all their personalization needs or a generic model for sci-fi movies that accepts personality as user input? To figure this out, we have to explore what friendship is in our own eyes. Is it empathy? Does it hinge on our need to be understood? Or is it both?
Friendship beyond personalization
Despite my attempts to code this and figure out a model, I remained at a loss. I really wanted to build this and present a working personalization model based on building human-like contextual relationships with the bot. I tried for a week and made zero progress, which is perhaps to be expected given what I was trying to solve. Although I wasn’t able to build a working model, I suspect future researchers will make it happen.
It is a magical time to be alive if you are working in AI. You are only bound by your imagination, creativity, and of course, the data you have at hand. While some ideas might sound preposterous and others sound dangerously difficult, I think there is some merit in trying to see how much weight you can lift in this intellectually stimulating gym class.
You and I are not limited to the tools we use. In the age of artificial intelligence, almost anything you can imagine becomes possible with the right datasets. Instead of introducing a master-slave dichotomy in our quest for artificial intelligence, we should opt to make machines that empathize and befriend.
Shival Gupta, Product enthusiast and entrepreneur.
This story originally appeared on Hackernoon. Copyright 2017.