“Systems that ‘understand’… but not by themselves, designed and supervised through carefully crafted automation.”
This article examines the manufacture of dialogue automation systems for marketing and customer relations. Since 2016, brands and social networks have consistently showcased tools for online language interaction, such as chatbots and virtual assistants.
In principle, these conversational robots are programs that allow humans to interact using ordinary language via an automatic chat interface. These “robots” are particularly popular in marketing and customer relationships. They are seen as a way to manage high volumes of conversation and to remain in constant contact with customers.
But behind these systems, sometimes referred to as examples of “artificial intelligence”, is a hybrid method of production combining automation and human control. Indeed, chatbots are devices that require a phase of knowledge expansion and then continuous supervision of the system. With this in mind, we examine how this human work is organised around algorithmic design activity.
The first machine that could interact in natural language appeared publicly in 1966: Joseph Weizenbaum, a German-American computer scientist, published an article on ELIZA (Weizenbaum, 1966), a program that could converse with people. Based on a reformulation principle, this program was inspired by how a psychotherapist might respond when listening to patients. This model enabled the development of a robot whose very limited knowledge does not hinder the discussion, since it focuses only on what the human interlocutor says. In his article, Weizenbaum showed that a machine could, through its management of the interaction alone, be perceived as having an understanding similar to that of a psychotherapist; despite strong criticism of its simplistic keyword recognition, ELIZA remains known as a founding example of conversational programs. Fifty years later, in June 2016, Facebook launched a development interface for chatbot designers on Messenger, swiftly followed by other social networks and mobile applications such as Skype, Telegram and Slack… thus the chatbot movement was launched.
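ELIZA's reformulation principle can be illustrated with a minimal sketch: a few keyword patterns (hypothetical rules, not Weizenbaum's original script) that match part of the user's sentence and echo it back as a question, so the program needs no knowledge of its own.

```python
import re

# Hypothetical reformulation rules in the spirit of ELIZA: each pattern
# captures a fragment of the user's sentence, which is reflected back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i feel (.*)", re.I), "How long have you felt {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]
DEFAULT = "Please go on."  # neutral prompt when nothing matches

def respond(sentence: str) -> str:
    """Return a reformulation of the user's sentence, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am worried about my exams"))
# → Why do you say you are worried about my exams?
```

The trick is visible in the code: the program "understands" nothing, it only restructures what the interlocutor has just said.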
And with good reason: new infrastructure resources, such as cloud computing, big data and the deployment of learning algorithms, provide capacities for computing and storing data that did not exist a few decades ago. These digital “robots”, programmed to answer simple questions instantly, now appear on brand websites, but also on social networks and some instant messaging applications. Like ELIZA, chatbots are developed for use in many service areas, bringing into public debate concerns about the possible substitution of humans by machines in interpersonal professions. However, the configuration of activities around chatbot design shows, on the contrary, a significant human presence behind these devices. We therefore examine the nature of these human activities, which are intrinsic to the functioning of these devices.
To explore this topic, we draw on a survey conducted with Orange between February and July 2018: this survey includes seven interviews with professionals (developers and designers), as well as an ethnography of a chatbot project.
In concrete terms, developing dialogue automation programs requires building operative conversation models; that is, a model of correspondence between the human’s turn to speak and the machine’s turn to “speak”, a way of linking the exchange between users and the program while ensuring consistency, in particular through user intent detection. However, unlike machine-learning algorithms, the natural language processing algorithms used in chatbot management interfaces do not evolve autonomously: they cannot recognise new terms on their own.
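User intent detection of the kind described here can be sketched, under simplifying assumptions, as matching the user's turn against hand-maintained keyword sets (the intents and vocabulary below are hypothetical). The key point is visible in the data structure: a new term is recognised only after a human adds it to a list.

```python
# Hypothetical intent vocabulary, maintained by hand by the designers.
INTENTS = {
    "billing": {"invoice", "bill", "payment", "charge"},
    "outage": {"down", "outage", "connection", "offline"},
}

def detect_intent(utterance: str) -> str:
    """Score each intent by keyword overlap with the user's turn."""
    words = set(utterance.lower().split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # With no overlap at all, report a fallback rather than guess.
    return best if scores[best] > 0 else "fallback"

print(detect_intent("my connection is down"))  # → outage
print(detect_intent("where is my parcel"))     # → fallback
```

Any real system is more sophisticated, but the dependency is the same: the model only "evolves" when someone expands these lists.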
Since there is no system to date capable of generating its own knowledge, it is necessary to continually inject new knowledge into the system so that it can understand any unexpected vocabulary or intent. In the absence of full automation of the process, these knowledge bases are entered manually, that is, imagined and scripted by humans: “self-service” dialogue must therefore first be anticipated and written, like a theatre dialogue.
The design of conversational robots therefore requires scriptwriting and dialogue anticipation work, upstream of the natural language processing.
Thus, these devices promise to interact in ordinary language; but they can do so only if they deal with very limited subjects. This invisible condition means that not every request can be fulfilled, since each dialogue must be categorised and written in advance to be operational.
Carefully crafted automation
Writing is therefore not just initial development work but a key activity in the operation of the system. This interweaving of the “craft” of creation and supervision on the one hand, and the automation of the device on the other, raises the question of production scale: if full automation of the system is difficult to envisage, to what extent can we talk of mass production? This question arises more broadly for more general-purpose software solutions that incorporate learning algorithms (often referred to as “artificial intelligence” or “assistants”) and are intended for use in any information-processing field (for example, in the medical or legal fields).
“The first problem is that AI is hard to mass produce. The machine must be specially trained for each new application. (…) For the customer, choosing this technical solution therefore means investing time and money. (…) Too few solutions designed by this software company can be sold on a large scale to customers. Instead, it looks like IBM needs to reinvent the wheel for every customer and every problem.” Tom Austin, analyst at Gartner, 2018.
The problem here attributed to “Watson”, produced by IBM, is in fact common to all dialogue automation solutions available on the market. One paradox of “artificial intelligence” thus lies in its low autonomy: even though these algorithms are trained to make decisions, and can do so, they first require the transfer of human knowledge (translation, modelling and transfer of basic knowledge), a transfer that itself requires several data processing operations in advance. Developed to carry out “logical reasoning”, these systems lack specific knowledge: the manual “learning” work is therefore unique to each sector of activity. In the end, conversational robots are not very autonomous.
Systems that “understand”… but not by themselves
Thus, the elaboration of the dialogues passes through different levels of design: first, the scriptwriting of interactions by service theme; then the continuous expansion of the tool’s knowledge bases, in order to recognise a diverse range of terms as required; finally, design work to streamline and facilitate the language interactions between chatbot and user. These different design steps show that dialogue automation is a multidimensional writing process, whose scriptural dimensions intertwine as distinct and complementary activities:
“The interesting thing about this type of project is that since everyone is discovering the subject, everyone pitches in. Everyone has really worked on all aspects. I have not just been involved in design. I have done some path writing and intent management.” Dominique, chatbot assistance, 2018.
Training the system, which requires significant human work, involves all team members, who also contribute to work outside their usual expertise. Beyond specific design skills, more ordinary understanding skills, comparable to reCAPTCHA training, are another key resource: designers need to be able to spot flaws in intent detection, answers that need refining, syntactically incorrect sentences, etc. This continuous monitoring of the system allows designers to compensate for the lack of flexibility in automatic dialogues.
One of Orange’s chatbot projects faces another unexpected problem: the provider’s technical system is not accessible to non-developers, and is not as automatic as hoped.
“We realised that it was much more complicated than expected; the “AI” system we use is not a magic trick that does everything on its own. The provider company initially claimed that their artificial intelligence system was self-learning. That is not true! (…) It actually needed to be taught how to learn. So maintaining Artificial Intelligence involves many more human interventions than is believed.” Dominique, chatbot assistance, 2018.
The use of the natural language recognition tool has therefore led Orange to formulate new business skills adapted to the technical solution, such as skills for managing dialogue exchanges, represented by “trees”.
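Such dialogue “trees” can be sketched as a simple data structure (a hypothetical illustration, not the provider's actual tool): each node carries the bot's line and named branches for the user's possible choices, which is exactly what makes every path something a human must write in advance.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One turn of the scripted dialogue: what the bot says, then branches."""
    bot_says: str
    branches: dict = field(default_factory=dict)  # user choice -> next Node

# A hypothetical two-level assistance tree, written by hand.
leaf_mobile = Node("Let's check your mobile plan settings.")
leaf_internet = Node("Let's restart your internet box together.")
root = Node(
    "Is your problem about mobile or internet?",
    {"mobile": leaf_mobile, "internet": leaf_internet},
)

def step(node: Node, choice: str) -> Node:
    """Follow one user choice; stay in place if the choice is unknown."""
    return node.branches.get(choice, node)

print(step(root, "internet").bot_says)
# → Let's restart your internet box together.
```

Every branch a user might take corresponds to a node someone scripted; an unanticipated request simply has nowhere to go.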
This means that these conversational robots have limited autonomy and require significant human activity: creation, of course, but also supervision and training. This high dependence is reflected in the emergence of various intermediate actors, embodied in mediating roles bridging the gap between the technical system, design and interaction analyses. These mediating agents must then mobilise a range of skills beyond their own activities, which leads to rethinking the roles needed to design these interfaces. Designers are also required to mobilise their own language and communication experience to develop conversational frameworks.
What accompanies the development of the device is therefore not only technical skill, but also a transfer of ordinary and cross-cutting skills, such as the construction of a dictionary of synonyms, the understanding of miswritten words, or the reformulation of answers poorly understood by users… none of which the system integrates on its own.
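Two of these ordinary skills, the synonym dictionary and the tolerance of miswritten words, can be sketched together under simple assumptions (the vocabulary below is invented for illustration): a hand-built mapping of synonyms, plus fuzzy matching against the known terms to absorb small typos.

```python
import difflib

# Hypothetical hand-built synonym dictionary, maintained by designers.
SYNONYMS = {
    "bill": "invoice",
    "broken": "faulty",
    "web": "internet",
}
# Canonical terms the bot's intents are written against.
CANONICAL = ["invoice", "faulty", "internet", "mobile"]

def normalise(word: str) -> str:
    """Map a user's word to a canonical term the bot recognises."""
    word = word.lower()
    if word in SYNONYMS:
        return SYNONYMS[word]
    # Tolerate small typos by fuzzy-matching against known vocabulary.
    close = difflib.get_close_matches(word, CANONICAL, n=1, cutoff=0.8)
    return close[0] if close else word

print(normalise("bill"))     # → invoice
print(normalise("invioce"))  # → invoice
```

The sketch makes the article's point concrete: both the dictionary and the canonical vocabulary are human artefacts that must be extended by hand for every new domain.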
The automation of natural language dialogues is developed through a hybrid method combining the “cognitive” and social skills of the designers with the computational skills of the software: the automatic dialogue with a brand is thus scripted by different professional profiles (marketing, technical and design). Chatbots are therefore hybrid devices whose learning is an important part of the development process, as it is a moment of transition and expansion of the system. The more the knowledge base expands, the more the script of uses expands, and the more the system is able to respond to interlocutors on various topics. This manual learning work, however, takes the most time, since it extends from the initial design to the continuous supervision of the bot and also includes activities for readjusting interactions; it is both a matter of memorising variations of expression in ordinary language and of dealing with new intents (new needs or new themes of dialogue). While chatbots are “open” systems in terms of their possibilities for continuous evolution, the functionality of these devices remains dependent on careful human supervision, whatever its frequency. The future of chatbots as industrial devices for automating dialogues therefore depends, in part, on the future possibilities for adjusting this human work of reframing chatbot “understanding”, which is to date an essential component of the operation of conversational agent systems.
- Bernard, S., 2014, “Le travail de l’interaction. Caissières et clients face à l’automatisation des caisses”, Sociétés contemporaines, n° 94, p. 93-119
- Denis, J., 2018, Le travail invisible des données, Presses des Mines
- Velkovska, J., Beaudouin, V., 2014, “Parler aux machines, coproduire un service. Artificial intelligence and customer work in automated voice services”, in Kessous, E., Mallard, A. (dir.), La fabrique de la vente. Le travail commercial dans les télécommunications, Presses des Mines, Paris, p. 97-128
- Weizenbaum, J., 1966, “ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine”, Communications of the ACM