WIRED.COM press article
towards empathetic chatbots


The WIRED.COM article covers new developments from Facebook's AI labs that aim to improve the dialogue experience of voice assistant users by adding empathy to conversational scenarios.


Two extracts from the article:

  1. A dialogue between two people feels natural when both interlocutors share knowledge of the full context of their exchange. Citation: "Speaking as naturally as a person requires common-sense understanding of the world."
  2. To build this "knowledge," voice assistants must be "trained" with a large amount of data. Citation: "Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit…"


Application to the industrial field:

To be assisted in carrying out their tasks, often without visual support, technicians must be able to interact with their assistant as fluidly as possible. The use of natural language and knowledge of the business vocabulary are therefore fundamental to a successful user experience with an industrial voice assistant. Moreover, the assistant's knowledge of the "work context" allows it to be precise, relevant, and effective in its dialogue with the operator.

The specificity of industry is that no company can mobilize as much data (147 million conversations) on its business processes to teach a voice assistant to interact intelligently with a technician. With at most a few hundred, or at best a few thousand, samples available, other methods are therefore needed for an industrial voice assistant to acquire this working context.
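To illustrate the small-data setting described above, here is a minimal sketch of one family of such methods: retrieval-based intent matching, which needs only a handful of labeled domain utterances rather than millions of conversations. The sample utterances, intent names, and scoring approach below are illustrative assumptions, not the actual Spix.SKILLS implementation.

```python
from collections import Counter
import math

# Hypothetical sample of domain utterances mapped to intents; a real
# industrial assistant would use a curated (but still small) corpus
# reflecting the company's business vocabulary.
SAMPLES = {
    "show me the torque value for bolt three": "lookup_torque",
    "what is the next step of the procedure": "next_step",
    "repeat the last instruction": "repeat_step",
    "log a defect on this component": "report_defect",
}

def _vector(text):
    # Bag-of-words term counts for a lowercased utterance.
    return Counter(text.lower().split())

def _cosine(a, b):
    # Cosine similarity between two term-count vectors.
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def match_intent(utterance):
    """Return the intent of the closest known sample utterance."""
    vec = _vector(utterance)
    best = max(SAMPLES, key=lambda s: _cosine(vec, _vector(s)))
    return SAMPLES[best]

print(match_intent("please repeat that instruction"))  # repeat_step
```

With only four samples the matcher already routes paraphrased requests to the right intent; in practice such retrieval baselines are often combined with domain-specific fine-tuning of a pretrained model when a few thousand samples are available.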

These methods are implemented in Spix.SKILLS, developed by SIMSOFT INDUSTRY, and allow our voice assistants to be accepted, used, and useful to all technicians in your industry.

Commented article, from WIRED.COM:

A New Chatbot Tries a Little Artificial Empathy

A bot created by Facebook aims to make conversation with people more natural, though it also could enable better fakes.


Siri, Alexa, or Google Assistant can set a timer, play a song, or check the weather with ease, but for a real conversation you may as well try talking to the toaster.

Speaking as naturally as a person requires common-sense understanding of the world, knowledge of facts and current events, and the ability to read another person’s feelings and character. It’s no wonder machines aren’t all that talkative.

A chatbot developed by artificial intelligence researchers at Facebook shows that combining a huge amount of training data with a little artificial empathy, personality, and general knowledge can go some way toward fostering the illusion of good chitchat.


The new chatbot, dubbed Blender, combines and builds on recent advances in AI and language from Facebook and others. It hints at the potential for voice assistants and auto-complete algorithms to become more garrulous and engaging—as well as a worrying moment when social media bots and AI trickery are more difficult to spot.

“Blender seems really good,” says Shikib Mehri, a PhD student at Carnegie Mellon University focused on conversational AI systems who reviewed some of the chatbot’s conversations.

Snippets shared by Facebook show the bot chatting amiably with people online about everything from Game of Thrones to vegetarianism to what it’s like to raise a child with autism. These examples are cherry-picked, but in experiments, people judged transcripts of the chatbot’s conversations to be more engaging than those of other bots, and sometimes as engaging as conversations between two humans.


Blender still gets tripped up by tricky questions and complex language, and it struggles to hold the thread of a discussion for long. That’s partly because it generates responses using statistical pattern matching rather than common sense or emotional understanding.


Other efforts to develop a more contextual understanding of language have shown recent progress, thanks to new methods for training machine-learning programs. Last year, the company OpenAI trained an algorithm to generate reams of often convincing text from a prompt. Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit. In January, Google revealed a chatbot called Meena that uses a similar approach to converse in a more naturally human way.

Facebook’s Blender goes beyond these efforts. It’s based on even more training data, also from Reddit, supplemented with training on other data sets: one that captures empathetic conversation, another tuned to different personalities, and a third that includes general knowledge. The finished chatbot blends together the learning from each of these data sets.

“Scale is not enough,” says Emily Dinan, a research engineer at Facebook who helped create Blender. “You have to make sure you’re fine-tuning to give your model the appropriate conversational skills like empathy, personality, and knowledge.”

The quest for conversational programs dates to the early days of AI. In a famous thought experiment, computer science pioneer Alan Turing set a goal for machine intelligence of fooling someone into thinking they are talking to a person. There is also a long history of chatbots fooling people. In 1966, Joseph Weizenbaum, a professor at MIT, developed ELIZA, a therapist chatbot that simply reformulated statements as questions. He was surprised to find volunteers thought the bot sufficiently real to divulge personal information.

More sophisticated language programs can, of course, also have a darker side. OpenAI declined to publicly release its text-generating program, fearing it would be used to churn out fake news. More advanced chatbots could similarly be used to make more convincing fake social media accounts, or to automate phishing campaigns.

The Facebook researchers considered not releasing Blender but decided the benefits outweighed the risks. Among other things, they believe other researchers can use it to develop countermeasures. Despite the advances, they say the program remains quite limited.

“We absolutely thought about the risks,” says Stephen Roller, another Facebook research engineer. “Releasing these models enables other top research labs to expand upon this research” and detect misuse. He says Blender is probably still too crude to fool anyone. “We haven’t solved dialog,” he says.

Zhou Yu, an assistant professor at UC Davis who specializes in AI and language, says recent advances have produced chatbots that seem more fluent. But they still can’t sustain a natural conversation for long. She says it’s hard to assess how these systems would perform in the real world based on a research paper. “Every paper can show you some examples,” she says. “But I assume they are talking to some very cooperative users.”
