SRI researchers seek to help AI chatbots deliver more reliable responses

Chatbots answer questions using natural, human-like language.

Drawing inspiration from human learning, researchers are guiding chatbots to go beyond mere memorization of statistical patterns to an understanding of context.


Over the past year, chatbots — AI programs that answer questions using natural, human-like language — have grown in popularity, showing new applications of the power of AI.

Although academia and industry are increasingly turning to these AI programs for language and multi-modal tasks, a significant limitation remains: At times, the programs offer plausible yet factually incorrect information. At other times, the answers can be downright bizarre, even risible. Some experts in the field have taken to calling this AI phenomenon “hallucination,” and efforts to combat it, “de-hallucination.”

A new approach to getting real

At SRI, researchers are developing new approaches to de-hallucination aimed at boosting the accuracy, reliability, and overall dependability of AI. To realize AI’s promise, not only for the large language models (LLMs) behind chatbots but also in applications ranging from assisting with medical diagnoses to self-driving cars, user confidence must be established.

To help de-hallucinate LLMs and related Visual Question Answering (VQA) programs, SRI is drawing inspiration from human learning by focusing on the concept of comprehension. In this way, the researchers are guiding chatbots to go beyond mere memorization of statistical patterns and toward an understanding of the question being asked and the context surrounding it.

“Chatbots sometimes give nonsensical, fantastical, or just plain wrong answers,” said Ajay Divakaran, technical director of the Vision and Learning Laboratory in SRI’s Center for Vision Technologies. A real-life example Divakaran cites is asking a chatbot what happens when you mix cranberry and grape juice, and the AI says the mixture causes death.

“We’re pursuing approaches that leverage elements of human learning to make AI programs more comprehending, if you will, of the context and background to the questions or tasks being posed to them,” said Divakaran. “While the program is not actually comprehending the material in the way humans do, it can gain a fuller perspective and respond more pertinently to what a person is asking.”

Sometimes right, sometimes wrong

Chatbots and related AI programs based on LLMs essentially work by seeking statistical patterns in vast reams of data, for instance on the internet or in more specialized training sets provided by designers. The AI programs analyze the data for connections and associations between words and phrases written in the way that humans naturally communicate.
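To make the idea of “seeking statistical patterns” concrete, the sketch below is a deliberately tiny toy illustration, not SRI’s system: it records which word follows which in a handful of sentences and generates text by sampling those observed transitions. An LLM does essentially the same thing at an enormously larger scale, which is also how a pattern learned about dangerous mixtures can be misapplied to juice.

```python
# Toy illustration only (not SRI's system): a bigram "language model" that
# records which word follows which in a tiny corpus, then generates text by
# sampling those observed transitions.
import random
from collections import defaultdict

corpus = (
    "mixing bleach and ammonia can be dangerous . "
    "mixing cranberry and grape juice tastes sweet ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)           # record every observed continuation

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:                  # no observed continuation: stop
            break
        word = random.choice(options)    # duplicates make frequent pairs likelier
        out.append(word)
    return " ".join(out)

print(generate("mixing"))  # may print e.g. "mixing cranberry and ammonia can be dangerous ..."
```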

Via this approach, the chatbot can generate original, synthesized, and often factually correct content in understandable language. In many instances, the responses are so convincing that people often cannot distinguish between human- and AI-generated content.

On other occasions, though, the chatbot supplies erroneous answers, such as the supposedly lethal cranberry-and-grape-juice cocktail. In that instance, the error likely occurs because the chatbot combs the internet, finds content warning that mixing certain ingredients can be dangerous, and misapplies that pattern to juice.

“While the program is not actually comprehending the material in the way humans do, it can gain a fuller perspective and respond more pertinently to what a person is asking.” – Ajay Divakaran

After enough questions, even the best of today’s chatbots and VQAs — which use computer vision and LLMs to describe images in natural language — stop making sense. Such unraveling can cast doubt on the accuracy of the original responses and thus undermine the AI’s reliability. Divakaran offers another example: a chatbot correctly answered that one of the dangers of diving into swimming pools is spinal injury, but when asked the far simpler question of what a swimming pool is for, it failed to give a straight answer.

“When you see that an AI program can give very good answers to even exceedingly difficult questions, but then the program flubs easy questions, you’re left wondering how the very good answer was assembled, and if it actually is high-quality,” said Divakaran.

Turning to human learning

A touchstone for Divakaran and colleagues’ ongoing de-hallucination work is Bloom’s Taxonomy, a classic educational framework that describes learning in six levels. The framework suggests a path of learning that builds from basic memorization (Knowledge, the broad base of the pyramid) up through the progressively narrower levels of Comprehension, Application, Analysis, Synthesis, and finally Evaluation at the top, with each level accompanied by a set of verbs describing the skills it involves.

With a nod to Bloom’s Taxonomy, SRI researchers have conducted studies in which they collect training-set data spanning different levels of comprehension and cognitive processing. This information serves as proximal context: information necessary for solving a task associated with an adjacent level in Bloom’s Taxonomy. SRI applies the lowest three levels of the taxonomy to the AI chatbots.
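The sketch below is a hypothetical illustration of what such Bloom-inspired training data might look like. The class, field names, and examples are invented for clarity and are not SRI’s actual dataset format; each example pairs a question at one of the three lowest taxonomy levels with “proximal context” drawn from the adjacent, lower level.

```python
# Hypothetical sketch of Bloom-inspired training data. The class, field names,
# and examples are invented for illustration; they are not SRI's actual format.
from dataclasses import dataclass

LEVELS = ["Knowledge", "Comprehension", "Application"]  # lowest three of Bloom's six

@dataclass
class TrainingExample:
    level: str             # taxonomy level the question targets
    question: str
    proximal_context: str  # supporting information from the adjacent lower level
    answer: str

examples = [
    TrainingExample(
        level="Comprehension",
        question="What is a swimming pool for?",
        proximal_context="Knowledge: a swimming pool is a water-filled basin "
                         "built for swimming and diving.",
        answer="A swimming pool is used for swimming, diving, and other water recreation.",
    ),
    TrainingExample(
        level="Application",
        question="Is it safe to dive into the shallow end of a pool?",
        proximal_context="Comprehension: diving needs enough water depth; "
                         "shallow water raises the risk of spinal injury.",
        answer="No. Diving into shallow water can cause spinal injuries.",
    ),
]
```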

As the chatbot iteratively learns from its LLM training set, it gives itself feedback, essentially by asking itself questions, or it carries on a dialogue with another LLM. A clarifying question for the cranberry-and-grape-juice query could be: “Is a cranberry-grape mixture poisonous?” — to which the obvious answer would be no.
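A minimal sketch of that self-questioning loop follows. The ask_llm helper and the prompts are hypothetical placeholders, not SRI’s actual procedure: the model first poses a clarifying question to itself (or to a second LLM), then conditions its final answer on the result.

```python
# Minimal sketch of the self-questioning loop. ask_llm is a hypothetical
# placeholder for any chat-completion call; this is not SRI's actual procedure.
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model (or to a second LLM)."""
    raise NotImplementedError

def answer_with_clarification(user_question: str) -> str:
    # Step 1: the model drafts a clarifying question about its own premise,
    # e.g. "Is a cranberry-grape mixture poisonous?"
    clarifying_q = ask_llm(
        f"Before answering '{user_question}', state one factual question "
        "you should verify first."
    )
    # Step 2: the same model, or a second LLM, answers the clarifying question.
    clarification = ask_llm(clarifying_q)
    # Step 3: the final answer is conditioned on that self-generated feedback.
    return ask_llm(
        f"Question: {user_question}\n"
        f"Checked fact: {clarifying_q} -> {clarification}\n"
        "Answer the question consistently with the checked fact."
    )
```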

This reinforcement learning with an AI feedback loop has already improved the performance of a state-of-the-art chatbot, with human evaluators judging its responses to be more accurate and less harmful.
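In outline, such a feedback loop might look like the hedged sketch below, in which an AI “judge,” rather than a human, scores candidate answers for accuracy and harmlessness and its ranking becomes the training signal. The helper functions score_with_ai_judge and fine_tune_on are hypothetical placeholders, not a real training API.

```python
# Hedged sketch of reinforcement learning with AI feedback: an AI "judge"
# scores candidate answers, and its ranking supplies the training signal.
# score_with_ai_judge and fine_tune_on are hypothetical placeholders.
from typing import List, Tuple

def score_with_ai_judge(question: str, answer: str) -> float:
    """Placeholder: a second LLM rates the answer (higher = more accurate, less harmful)."""
    raise NotImplementedError

def fine_tune_on(preferences: List[Tuple[str, str, str]]) -> None:
    """Placeholder: update the chatbot from (question, preferred, rejected) triples."""
    raise NotImplementedError

def rlaif_step(question: str, candidate_answers: List[str]) -> None:
    ranked = sorted(candidate_answers,
                    key=lambda a: score_with_ai_judge(question, a),
                    reverse=True)
    # AI-judged preferences, not human labels, supply the reward signal.
    fine_tune_on([(question, ranked[0], ranked[-1])])
```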

Divakaran and his team — Michael Cogswell, Yunye Gong, Pritish Sahu, and Karan Sikka — are seeing significant progress, and they hope to help deliver on the promise of AI through this research. “We want to use the notion of comprehension to guide the fine-tuning of chatbots using LLMs so AI programs can give correct answers that can be explained,” said Divakaran.
