Collaborative Human-Machine Interaction in Mobile Phone Support Centers: A Case Study

Citation

Dent, K. Collaborative Human-Machine Interaction in Mobile Phone Support Centers: A Case Study. Intelligent Human Systems Integration Annual Conference 2018; Dubai, UAE. Date of talk: 2018-01-07.

Abstract

Advances in speech recognition, natural language understanding, and speech synthesis are enabling genuinely useful conversational agents. Technology users are becoming habituated to talking to their devices as the consumer market fills with products and personal assistants that support voice interaction. The current slate of personal assistants can help people by, for example, answering questions and scheduling appointments. There are also task-based virtual agents that partially automate activities such as ordering food and making hotel reservations. However, these agents fall short of the truly interactive, goal-based assistants that have been the focus of our recent research.

Our work has identified four types of human-agent interactions:

1. Question/answering
2. Transactional
3. Procedural
4. Diagnostic

Question/answering interactions tend to be brief: individuals ask simple fact-based questions and get answers. There is little support for information finding, however; these systems do not ask clarifying or disambiguating questions, and they offer no help in narrowing a search or otherwise making information seeking more precise. Transactional agents collect information from people and then execute a transaction on their behalf, such as reserving a hotel. These are generally modeled as frame- or slot-filling exercises in which a fixed set of attributes must be supplied before the transaction can be executed.

While the first two interaction types are addressed by currently available assistants, they do not demonstrate true collaboration toward a shared goal. Although transactional interactions assist people with a task, the goals are predetermined, and the agent simply guides users to them. In both procedural and diagnostic interactions, by contrast, humans and agents first establish a common goal and then work together to achieve it. Procedural interactions are initiated by a person seeking guidance to accomplish a task. An agent might simply recite instructions, much like simple question/answering, but in truly interactive procedural interactions the agent understands the procedure as a sequential series of subtasks, where each step may include preconditions that must be satisfied before proceeding. If an obstacle is encountered, the agent should branch to another procedure to resolve it before returning to the original goal; the agent and human operate in tandem to achieve it. Diagnostic interactions are initiated when a person presents a problem. Neither the agent nor the individual knows the cause of the problem or how to fix it, but by soliciting symptoms and suggesting trial solutions, the agent works collaboratively with the person to discover the root cause and find the correct resolution.
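The procedural case is the most structured of the four: a procedure is a sequence of subtasks, each possibly gated by a precondition, and an unmet precondition triggers a recovery sub-procedure before the original goal is resumed. The Python sketch below is only an illustration of that step-and-precondition structure; the names and data model are hypothetical, not a description of any implementation discussed in the talk.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Step:
    """One subtask in a procedure."""
    instruction: str
    requires: Optional[str] = None            # state flag that must be true before this step
    recovery: Optional["Procedure"] = None    # sub-procedure run to satisfy the precondition
    provides: Optional[str] = None            # state flag set once the step is done


@dataclass
class Procedure:
    goal: str
    steps: List[Step] = field(default_factory=list)

    def run(self, state: Dict[str, bool], say: Callable[[str], None] = print) -> bool:
        """Walk the steps in order; branch into a recovery procedure when a
        precondition is unmet, then return to the original goal."""
        say(f"Goal: {self.goal}")
        for step in self.steps:
            if step.requires and not state.get(step.requires):
                if step.recovery is None or not step.recovery.run(state, say):
                    say(f"Stuck: '{step.requires}' is required for '{step.instruction}'.")
                    return False
            say(f"Step: {step.instruction}")
            if step.provides:
                state[step.provides] = True   # in a real agent, set only after the user confirms
        say(f"Done: {self.goal}")
        return True


# Hypothetical example: adding an email account requires network connectivity;
# if it is missing, a Wi-Fi procedure runs first, then email setup resumes.
connect_wifi = Procedure("Connect to Wi-Fi", [
    Step("Open Settings > Wi-Fi and join a network", provides="wifi_connected"),
])

configure_email = Procedure("Configure email", [
    Step("Open the mail app"),
    Step("Add your email account", requires="wifi_connected", recovery=connect_wifi),
    Step("Enter the server settings and save"),
])

configure_email.run({})   # state starts empty, so the Wi-Fi branch is taken first
```

Running the example walks the email procedure, detours into the Wi-Fi sub-procedure when the connectivity precondition fails, and then resumes the original goal, mirroring the branch-and-return behavior described above.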

We recently worked on a project in which we implemented some of our ideas for interactive and collaborative interactions in the form of a virtual agent for customer service centers handling support calls for mobile phones and cellular devices. The business objective was to automate as much of the work done by human support personnel as possible while maintaining a high level of customer satisfaction. Our agent, Otto, was implemented as a chat agent that handles synchronous textual conversations with customers. It responds to specification questions about mobile devices (question/answering), explains how to perform tasks such as configuring email (procedural), and helps people troubleshoot problems with their phones (diagnostic).
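As a purely illustrative sketch (the talk does not describe Otto's internals, and every name below is hypothetical), an agent of this kind could route each incoming chat turn to a handler for one of the interaction types it supports:

```python
from typing import Callable, Dict

def answer_spec_question(msg: str) -> str:
    # Question/answering: look up a device fact and reply directly.
    return f"Looking up device specifications for: {msg!r}"

def run_procedure(msg: str) -> str:
    # Procedural: start step-by-step guidance toward the user's stated task.
    return f"Starting guided steps for: {msg!r}"

def run_diagnosis(msg: str) -> str:
    # Diagnostic: begin soliciting symptoms to narrow down a root cause.
    return "Let's troubleshoot. When did the problem first appear?"

HANDLERS: Dict[str, Callable[[str], str]] = {
    "question_answering": answer_spec_question,
    "procedural": run_procedure,
    "diagnostic": run_diagnosis,
}

def classify(msg: str) -> str:
    """Toy keyword-based intent classifier standing in for a real NLU model."""
    text = msg.lower()
    if "how do i" in text or "how to" in text:
        return "procedural"
    if any(w in text for w in ("not working", "problem", "won't", "broken")):
        return "diagnostic"
    return "question_answering"

def handle_turn(msg: str) -> str:
    """Handle one turn of a synchronous chat conversation."""
    return HANDLERS[classify(msg)](msg)

print(handle_turn("How do I configure email on my phone?"))
print(handle_turn("My phone won't connect to Wi-Fi"))
```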

