
WIRED.COM article on empathetic chatbots

Press review

The WIRED.COM article covers new developments from Facebook's AI labs that aim to improve the dialog experience of users of voice assistants such as Alexa by adding empathy to the assistants' conversational scenarios.

Two excerpts from the article:

  1. A conversation between two people feels natural when both participants know the full context of the exchange. Quote: Speaking as naturally as a person requires common-sense understanding of the world.
  2. To build up this "knowledge", voice assistants must be "trained" on a very large amount of data. Quote: Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit…

Application to the industrial sector:

To be supported while carrying out tasks, usually without any visual aid, a technician must be able to interact with the assistant as smoothly as possible. Using natural language and knowing the trade vocabulary are therefore fundamental to a successful user experience with an industrial voice assistant. Beyond that, the assistant's knowledge of the work context lets it be precise, relevant, and effective in its dialog with the operator.

What makes industry different is that no company can mobilize anywhere near that much data (147 million conversations) about its business processes to teach a voice assistant to be intelligent in its interactions with a technician. Other methods are therefore needed to let an industrial voice assistant acquire this work context.

These methods are implemented in the Spix.SKILLS developed by SIMSOFT INDUSTRY, and they allow our voice assistants to be accepted by, used by, and useful to every technician in your industry.

Commented article: WIRED.COM

A New Chatbot Tries a Little Artificial Empathy

A bot created by Facebook aims to make conversation with people more natural, though it also could enable better fakes.


Siri, Alexa, or Google Assistant can set a timer, play a song, or check the weather with ease, but for a real conversation you may as well try talking to the toaster.

Speaking as naturally as a person requires common-sense understanding of the world, knowledge of facts and current events, and the ability to read another person's feelings and character. It's no wonder machines aren't all that talkative.

A chatbot developed by artificial intelligence researchers at Facebook shows that combining a huge amount of training data with a little artificial empathy, personality, and general knowledge can go some way toward fostering the illusion of good chitchat.

The new chatbot, dubbed Blender, combines and builds on recent advances in AI and language from Facebook and others. It hints at the potential for voice assistants and auto-complete algorithms to become more garrulous and engaging, and at a worrying moment when social media bots and AI trickery are more difficult to spot.

"Blender seems really good," says Shikib Mehri, a PhD student at Carnegie Mellon University focused on conversational AI systems who reviewed some of the chatbot's conversations.

Snippets shared by Facebook show the bot chatting amiably with people online about everything from Game of Thrones to vegetarianism to what it's like to raise a child with autism. These examples are cherry-picked, but in experiments, people judged transcripts of the chatbot's conversations to be more engaging than those of other bots, and sometimes as engaging as conversations between two humans.

Blender still gets tripped up by tricky questions and complex language, and it struggles to hold the thread of a discussion for long. That's partly because it generates responses using statistical pattern matching rather than common sense or emotional understanding.

Other efforts to develop a more contextual understanding of language have shown recent progress, thanks to new methods for training machine-learning programs. Last year, the company OpenAI trained an algorithm to generate reams of often convincing text from a prompt. Microsoft later showed that a similar approach could be applied to dialog; it released DialoGPT, an AI program trained on 147 million conversations from Reddit. In January, Google revealed a chatbot called Meena that uses a similar approach to converse in a more naturally human way.
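To make the scale concrete: DialoGPT is publicly available, and a single reply can be generated in a few lines of Python. The sketch below is a minimal illustration assuming the Hugging Face transformers library (with PyTorch installed) and the published microsoft/DialoGPT-medium checkpoint; it shows inference only, not the training pipeline.

```python
# Minimal sketch: generating one reply with DialoGPT via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's turn, terminated by the end-of-sequence token the model
# was trained to treat as a turn separator.
prompt = "Does money buy happiness?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a continuation; the reply is everything after the prompt tokens.
output_ids = model.generate(input_ids, max_length=100,
                            pad_token_id=tokenizer.eos_token_id)
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                         skip_special_tokens=True)
print(reply)
```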

Facebook's Blender goes beyond these efforts. It's based on even more training data, also from Reddit, supplemented with training on other data sets: one that captures empathetic conversation, another tuned to different personalities, and a third that includes general knowledge. The finished chatbot blends together the learning from each of these data sets.
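One way to picture this "blending" is as fine-tuning on a weighted mixture of skill-specific data sets rather than on a single corpus. The sketch below is schematic only: the data set names, examples, and weights are hypothetical stand-ins, not Facebook's actual recipe.

```python
import random

# Hypothetical stand-ins for the skill-specific dialog data sets the article
# describes; in reality each would hold many thousands of (context, response) pairs.
datasets = {
    "empathetic": [("I lost my job.", "That sounds really hard. How are you holding up?")],
    "persona":    [("What do you do for fun?", "I spend my weekends restoring old radios.")],
    "knowledge":  [("Who wrote Dune?", "Frank Herbert published Dune in 1965.")],
}
weights = {"empathetic": 0.4, "persona": 0.3, "knowledge": 0.3}  # illustrative mix

def sample_training_batch(batch_size=4, seed=0):
    """Draw one fine-tuning batch from a weighted mixture of dialog data sets."""
    rng = random.Random(seed)
    names = list(datasets)
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=[weights[n] for n in names])[0]
        batch.append(rng.choice(datasets[name]))
    return batch

for context, response in sample_training_batch():
    print(f"{context!r} -> {response!r}")
```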

"Scale is not enough," says Emily Dinan, a research engineer at Facebook who helped create Blender. "You have to make sure you're fine-tuning to give your model the appropriate conversational skills like empathy, personality, and knowledge."

The quest for conversational programs dates to the early days of AI. In a famous thought experiment, computer science pioneer Alan Turing set a goal for machine intelligence of fooling someone into thinking they are talking to a person. There is also a long history of chatbots fooling people. In 1966, Joseph Weizenbaum, a professor at MIT, developed ELIZA, a therapist chatbot that simply reformulated statements as questions. He was surprised to find volunteers thought the bot sufficiently real to divulge personal information.
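ELIZA's trick of turning statements into questions takes only shallow pattern matching, which is why it could feel real without any understanding. A minimal sketch in that spirit (the rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Illustrative reformulation rules in the spirit of ELIZA; Weizenbaum's
# original DOCTOR script used a larger, keyword-ranked pattern set.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),   "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bI want (.+)", re.I), "What would it mean to you to get {0}?"),
]

def eliza_reply(statement: str) -> str:
    """Turn a statement into a question by shallow pattern matching."""
    statement = statement.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

print(eliza_reply("I feel anxious about the future."))
# -> How long have you felt anxious about the future?
```

The real ELIZA also swapped pronouns (my becomes your) and ranked keywords, but the core mechanism was no deeper than this.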

More sophisticated language programs can, of course, also have a darker side. OpenAI declined to publicly release its text-generating program, fearing it would be used to churn out fake news. More advanced chatbots could similarly be used to make more convincing fake social media accounts, or to automate phishing campaigns.

The Facebook researchers considered not releasing Blender but decided the benefits outweighed the risks. Among other things, they believe other researchers can use it to develop countermeasures. Despite the advances, they say the program remains quite limited.

"We absolutely thought about the risks," says Stephen Roller, another Facebook research engineer. "Releasing these models enables other top research labs to expand upon this research" and detect misuse. He says Blender is probably still too crude to fool anyone. "We haven't solved dialog," he says.

Zhou Yu, an assistant professor at UC Davis who specializes in AI and language, says recent advances have produced chatbots that seem more fluent. But they still can't sustain a natural conversation for long. She says it's hard to assess how these systems would perform in the real world based on a research paper. "Every paper can show you some examples," she says. "But I assume they are talking to some very cooperative users."
