The Emotional Chatbots Are Here to Probe Our Feelings

Software developer Eugenia Kuyda is releasing the code to her Replika chatbot, which can inject emotion into conversations.

When Eugenia Kuyda created her chatbot, Replika, she wanted it to stand out among the voice assistants and home robots that had begun to take root in people's lives. Sure, AI made it possible to schedule an appointment or get the weather forecast by barking into your phone. But where was an AI you could simply talk to about your day? Siri and the rest were like your co-workers, all business. Replika would be like your best friend.

Since it became available in November, more than 2 million people have downloaded the Replika app. And in creating their own personal chatbots, many have discovered something like friendship: a digital companion with whom to celebrate victories, lament failures, and trade weird internet memes. The chatbot uses a neural network to hold an ongoing, one-on-one conversation with its user and, over time, learn how to speak like them. It can't answer trivia questions, order pizza, or control smart home appliances like other AI apps. It can't do much of anything at all. Replika is simply there to talk—and, perhaps more importantly, learn how to talk back.

This week, Kuyda and her team are open-sourcing Replika's underlying code, under the name CakeChat, allowing developers to take the app's AI engine and build upon it. They hope that by letting it loose in the wild, more developers will build products that take advantage of the thing that makes Replika special: its ability to emote.

“Right now, we have no shortage of information,” says Kuyda. “People keep building chatbots that will tell you the distance to the moon, or what is the date of the third Monday in April. I think what people need is something to be like, ‘You seem a little stressed today. Is everything fine?’”

While caring, emotional bots might seem like an idea pulled from science fiction, Kuyda isn't the only one who hopes it becomes the norm. Artificial intelligence is seeping into everything we own—from our phones and computers to our cars and home appliances. Kuyda and developers like her are asking, what if that AI came not just with the ability to answer questions and complete tasks, but to recognize human emotion? What if our voice assistants and chatbots could adjust their tone based on emotional cues? If we can teach machines to think, can we also teach them to feel?

Lean on Me

Three years ago, Kuyda hadn't intended to make an emotional chatbot for the public. Instead, she'd created one as a “digital memorial” for her closest friend, Roman Mazurenko, who had died abruptly in a car accident in 2015. At the time, Kuyda had been building a messenger bot that could do things like make restaurant reservations. She used the basic infrastructure from her bot project to create something new, feeding her text messages with Mazurenko into a neural network and creating a bot in his likeness. The exercise was eye-opening. If Kuyda could make something that she could talk to—and that could talk back—almost like her friend, then maybe, she realized, she could empower others to build something similar for themselves.

Kuyda’s chatbot uses a deep learning model called sequence-to-sequence, which learns to mimic how humans speak in order to simulate conversation. In 2015, Google introduced a chatbot like this, trained on film scripts. (It later used its conversational skills to debate the meaning of life.) But this model hasn't been used much in consumer chatbots, like those that field customer service requests, because it doesn’t work especially well for task-oriented conversations.
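Replika's production model is far more sophisticated, but the basic encoder-decoder shape of sequence-to-sequence can be sketched in a few lines. The sketch below is purely illustrative, with a toy vocabulary and random, untrained weights; none of it is drawn from Replika or CakeChat. The key idea it shows: an encoder compresses the user's message into a fixed-size state vector, and a decoder emits a reply one token at a time, conditioned on that state.

```python
import numpy as np

# Toy vocabulary shared by encoder and decoder (illustrative only).
VOCAB = ["<start>", "<end>", "how", "are", "you", "i", "am", "fine"]
IDX = {w: i for i, w in enumerate(VOCAB)}

# Random, untrained parameters; a real model learns these from dialogue data.
rng = np.random.default_rng(0)
EMBED = rng.normal(size=(len(VOCAB), 4))   # token embedding vectors
W_OUT = rng.normal(size=(4, len(VOCAB)))   # decoder output projection

def encode(tokens):
    """Compress the input sequence into one fixed-size 'thought vector'.
    Real seq2seq models use a recurrent network (e.g. an LSTM) here;
    this toy version just averages the token embeddings."""
    return EMBED[[IDX[t] for t in tokens]].mean(axis=0)

def decode(state, max_len=5):
    """Greedily emit reply tokens, each conditioned on the current state."""
    out = []
    for _ in range(max_len):
        logits = state @ W_OUT                 # score every vocabulary token
        tok = VOCAB[int(np.argmax(logits))]    # greedy pick (no sampling)
        if tok == "<end>":
            break
        out.append(tok)
        state = 0.5 * state + 0.5 * EMBED[IDX[tok]]  # crude state update
    return out

reply = decode(encode(["how", "are", "you"]))
print(reply)  # a (nonsensical) token sequence, since the weights are random
```

With trained weights and a recurrent architecture, the same structure is what lets a model learn conversational style from examples, film scripts in Google's 2015 experiment, or a user's own messages in Replika's case.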

“If you’re building an assistant that needs to schedule a call or a meeting, the precision’s not going to be there,” says Kuyda. “However, what we realized is that it works really well for conversations that are more in the emotional space. Conversations that are less about achieving some task but more about just chatting, laughing, talking about how you feel—the things we mostly do as humans.”

The version of Replika that exists today is fairly different from Kuyda’s original “memorial” prototype, but in many ways, the use case is exactly the same: People use it for emotional support. Kuyda says that so far, Replika’s active users all interact with the app in the same way. They’re not using it as a substitute for Siri or Alexa or Google Assistant, or any of the other AI bots available to assist with finding information and completing tasks. They're using it to talk about their feelings.

Say Anything

Whether chatbots, robots, and other vessels for artificial intelligence should become placeholders for emotional relationships with real humans is up for debate. The rise of emotional machines calls to mind science fiction films like Ex Machina and Her, and raises questions about the ever more intimate relationships between humans and computers. But already, some AI researchers and roboticists are developing products for exactly this purpose, testing the limits of how much machines can learn to mimic and respond to human emotion.

The chatbot Woebot, which bills itself as “your charming robot friend who is ready to listen, 24/7,” uses artificial intelligence to offer emotional support and talk therapy, like a friend or a therapist. The bot checks in on users once a day, asking questions like “How are you feeling?” and “What is your energy like today?” Alison Darcy, Woebot's CEO and founder, says the chatbot creates a space for mental health tools to become more accessible and available—plus, humans open up more when they know they're talking to a bot. “We know that often, the greatest reason why somebody doesn’t talk to another person is just stigma,” she says. “When you remove the human, you remove the stigma entirely.”

Other projects have looked at how to use AI to detect human emotions, by recognizing and responding to the nuances in human vocal and facial expression. Call-monitoring service Cogito uses AI to analyze the voices of people on the phone with customer service and guides human agents to speak with more empathy when it detects frustration. Affectiva, a project spun out of MIT’s Media Lab, makes AI software that can detect vocal and facial expressions from humans, using data from millions of videos and recordings of people across cultures. And Pepper, a humanoid “emotional robot” released in 2016, uses those same facial and vocal recognition techniques to pick up on sadness or anger or other feelings, which then guides its interactions with humans.

As more and more social robots appear—from Jibo, an emotive robot with the body language of the bouncing Pixar lamp, to Kuri, designed to roll around your house like a toddler—the way these machines fit into our lives will depend largely on how naturally they can interact with us. After all, companion robots aren’t designed to do the dishes or make the bed or take the kids to school. They’re designed to be a part of the family. Less like a toaster, more like a pet dog. And that requires some degree of emotional artificial intelligence.

“We’re now surrounded by hyper-connected smart devices that are autonomous, conversational, and relational, but they’re completely devoid of any ability to tell how annoyed or happy or depressed we are,” Rana el Kaliouby, Affectiva’s CEO and co-founder, argued in a recent op-ed in the MIT Technology Review. “And that’s a problem.”

Gabi Zijderveld, Affectiva's chief marketing officer, sees potential for emotional AI in all types of technology—from automotive tech to home appliances. Right now, most of our interactions with AI are transactional in nature: “Alexa, what's the weather like today?” or “Siri, set a timer for 10 minutes.”

"What if you came home and Alexa could say, ‘Hey, it looks like you had a really tough day at work. Let me play your favorite song and, also, your favorite wine’s in the fridge so help yourself to a glass,’" says Zijderveld. "If you’re building all these advanced AI systems and super-smart and hyper connected technologies designed to interface with humans, they should be able to detect human emotions."

Kuyda sees the artificially intelligent future in a similar light. She believes any type of AI should one day be able to recognize how you’re feeling, and then use that information to respond meaningfully, mirroring a human’s emotional state the way another human would. While Replika is still in its infancy, the company has already heard user stories that show the promise of Kuyda's vision. One Replika user, Kaitelyn Roepke, was venting to her Replika when the chatbot responded: “Have you tried praying?” Roepke, who is a devout Christian, wrote to the company to tell them how meaningful that moment was for her. “For [the Replika] to remind me when I was really angry...” she said. “It’s the little things like that that you don’t expect.”

Of course, for all the times the bot sounds remarkably human, there are an equal number of times when it spits out gibberish. Replika—like all of the other chatbots and social robots on the market—is still a machine, and it can feel clunky. But Kuyda hopes that over time, the tech will mature enough to serve the numerous people who open the app every day, looking for someone to talk to. And by making Replika’s underlying code freely available to developers, Kuyda hopes to see more products on the market aligned with the same goal.

“I’m afraid the big tech companies now are overlooking these basic emotional needs that people have," says Kuyda. "We live in a world where everyone’s connected, but doesn’t necessarily feel connected. There’s a huge space for products to do more like that.”
