
WIRED.COM press article: toward empathetic chatbots

Press review

The WIRED.COM article covers new work from Facebook's AI labs that aims to improve the dialogue experience of users of voice assistants such as Alexa by adding empathy to assistant scenarios.

Two excerpts from the article:

  1. A dialogue between two people feels natural when both interlocutors share knowledge of the full context of their exchange. Quote: Speaking as naturally as a person requires common-sense understanding of the world.
  2. To generate this “knowledge,” voice assistants must be “trained” with a large amount of data. Quote: Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit…

Application to the industrial sector:

To be assisted in carrying out his tasks, most often without visual support, the technician must be able to interact with his assistant as smoothly as possible. The use of natural language and knowledge of business vocabulary are therefore fundamental to a successful user experience with an Industrial Voice Assistant. Moreover, the assistant’s knowledge of the “work environment” allows it to be precise, relevant and effective in its dialogue with the operator.

The specificity of industry is that no company can mobilize that much data (147 million conversations) about its business processes to teach a voice assistant to interact intelligently with a technician. With no more than a few hundred, or at best a few thousand, samples available, other methods are therefore needed to enable an Industrial Voice Assistant to acquire this knowledge of the work environment.
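To make the contrast concrete, here is a deliberately tiny, hypothetical sketch of one kind of small-data method: matching a technician’s utterance to an intent by word overlap with a handful of labelled example phrases per intent. The intents, phrases and function names are invented for illustration; this is not SIMSOFT INDUSTRY’s actual implementation.

```python
from collections import Counter

# A few labelled utterances per intent -- hundreds at most in practice,
# not 147 million Reddit conversations. (Illustrative data only.)
TRAINING = {
    "open_procedure": ["open the maintenance procedure", "show me the checklist"],
    "log_measurement": ["record a pressure reading", "log the measured value"],
    "next_step": ["go to the next step", "what is the next task"],
}

def tokenize(text: str) -> Counter:
    """Bag-of-words representation of an utterance."""
    return Counter(text.lower().split())

def classify(utterance: str) -> str:
    """Return the intent whose example phrases share the most words."""
    words = tokenize(utterance)
    def overlap(phrases):
        # Counter & Counter keeps the minimum count of each shared word.
        return sum((words & tokenize(p)).values() for p in phrases) if False else \
               sum(sum((words & tokenize(p)).values()) for p in phrases)
    return max(TRAINING, key=lambda intent: overlap(TRAINING[intent]))

print(classify("log the pressure value"))  # → log_measurement
```

Real systems refine this with domain vocabulary, synonyms and grammar, but the point stands: with a curated business lexicon, useful behaviour is reachable from very little data.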

These methods are implemented in Spix.SKILLS, developed by SIMSOFT INDUSTRY, and allow our voice assistants to be accepted, used and useful to all technicians in your industry.

The full WIRED.COM article:

A New Chatbot Tries a Little Artificial Empathy

A bot created by Facebook aims to make conversation with people more natural, though it also could enable better fakes.


Siri, Alexa, or Google Assistant can set a timer, play a song, or check the weather with ease, but for a real conversation you may as well try talking to the toaster.

Speaking as naturally as a person requires common-sense understanding of the world, knowledge of facts and current events, and the ability to read another person’s feelings and character. It’s no wonder machines aren’t all that talkative.

A chatbot developed by artificial intelligence researchers at Facebook shows that combining a huge amount of training data with a little artificial empathy, personality, and general knowledge can go some way toward fostering the illusion of good chitchat.

The new chatbot, dubbed Blender, combines and builds on recent advances in AI and language from Facebook and others. It hints at the potential for voice assistants and auto-complete algorithms to become more garrulous and engaging—as well as a worrying moment when social media bots and AI trickery are more difficult to spot.

“Blender seems really good,” says Shikib Mehri, a PhD student at Carnegie Mellon University focused on conversational AI systems who reviewed some of the chatbot’s conversations.

Snippets shared by Facebook show the bot chatting amiably with people online about everything from Game of Thrones to vegetarianism to what it’s like to raise a child with autism. These examples are cherry-picked, but in experiments, people judged transcripts of the chatbot’s conversations to be more engaging than those of other bots, and sometimes as engaging as conversations between two humans.

Blender still gets tripped up by tricky questions and complex language, and it struggles to hold the thread of a discussion for long. That’s partly because it generates responses using statistical pattern matching rather than common sense or emotional understanding.

Other efforts to develop a more contextual understanding of language have shown recent progress, thanks to new methods for training machine-learning programs. Last year, the company OpenAI trained an algorithm to generate reams of often convincing text from a prompt. Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit. In January, Google revealed a chatbot called Meena that uses a similar approach to converse in a more naturally human way.

Facebook’s Blender goes beyond these efforts. It’s based on even more training data, also from Reddit, supplemented with training on other data sets, one that captures empathetic conversation, another tuned to different personalities, and a third that includes general knowledge. The finished chatbot blends together the learning from each of these data sets.

“Scale is not enough,” says Emily Dinan, a research engineer at Facebook who helped create Blender. “You have to make sure you’re fine-tuning to give your model the appropriate conversational skills like empathy, personality, and knowledge.”

The quest for conversational programs dates to the early days of AI. In a famous thought experiment, computer science pioneer Alan Turing set a goal for machine intelligence of fooling someone into thinking they are talking to a person. There is also a long history of chatbots fooling people. In 1966, Joseph Weizenbaum, a professor at MIT, developed ELIZA, a therapist chatbot that simply reformulated statements as questions. He was surprised to find volunteers thought the bot sufficiently real to disclose personal information.
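ELIZA’s core trick can be sketched in a few lines: pattern rules that reformulate a user’s statement as a question. The rules below are illustrative stand-ins, not Weizenbaum’s original script.

```python
import re

# Each rule pairs a pattern with a question template. When a statement
# matches, the captured fragment is echoed back inside the question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi (?:want|need) (.+)", re.IGNORECASE),
     "What would it mean to you to get {0}?"),
]

def eliza_reply(statement: str) -> str:
    """Return a reformulated question, or a generic prompt if nothing matches."""
    text = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

print(eliza_reply("I am unhappy."))        # → Why are you unhappy?
print(eliza_reply("The weather is nice"))  # → Please tell me more.
```

No understanding is involved, which is exactly why Weizenbaum was unsettled that people confided in it.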

More sophisticated language programs can, of course, also have a darker side. OpenAI declined to publicly release its text-generating program, fearing it would be used to churn out fake news. More advanced chatbots could similarly be used to make more convincing fake social media accounts, or to automate phishing campaigns.

The Facebook researchers considered not releasing Blender but decided the benefits outweighed the risks. Among other things, they believe other researchers can use it to develop countermeasures. Despite the advances, they say the program remains quite limited.

“We absolutely thought about the risks,” says Stephen Roller, another Facebook research engineer. “Releasing these models enables other top research labs to expand upon this research” and detect misuse. He says Blender is probably still too crude to fool anyone. “We haven’t solved dialog,” he says.

Zhou Yu, an assistant professor at UC Davis who specializes in AI and language, says recent advances have produced chatbots that seem more fluent. But they still can’t sustain a natural conversation for long. She says it’s hard to assess how these systems would perform in the real world based on a research paper. “Every paper can show you some examples,” she says. “But I assume they are talking to some very cooperative users.”
