SRI aims to mitigate attacks on mixed reality headsets


The team that designed a mixed reality headset for the battlefield will determine how it can prevent or neutralize attacks against the system.


Mixed reality headsets that overlay detailed visual information on the wearer’s view of the real world will soon become a battlefield fixture. At the forefront of this work are experts in computer vision and human performance at SRI’s Center for Vision Technologies. And, now, that team is looking past the technology itself and thinking about how adversaries might try to turn the technology against the wearer through so-called cognitive attacks that challenge the wearer’s ability to use the technology as planned.

“These are attacks on the wearer’s cognition, perception, and physiology,” says Jeffrey Lubin, Senior Research Scientist at SRI. “The attacks aim to render the mixed reality system unreliable or ineffective.”

“The system is weaponized, like an anti-aircraft missile or a laser,” says Rakesh “Teddy” Kumar, a vice president of information and computing sciences in the Center for Vision Technologies. “It’s not necessarily trying to kill the wearer but making the user unable to properly use the system.”

New directions

A team from SRI has secured a multi-year, multi-million-dollar contract from the Defense Advanced Research Projects Agency (DARPA) to explore potential cognitive strategies an adversary might employ and to develop countermeasures to prevent, mitigate, or neutralize such attacks. This team is led by Lubin as Principal Investigator (PI) with Ajay Divakaran (co-PI) and Supun Samarasekera, both senior technical directors at the Center for Vision Technologies.

The team will draw upon expertise at several partner organizations, including the University of California, Santa Barbara, New York University, and Virginia Tech, as well as Dr. Hal Pashler, Professor Emeritus, whose perceptual modeling concepts are foundational in mixed reality technologies.

Samarasekera provides some examples of how an adversary might use interference to disrupt a user’s ability to take in information or to communicate with the mixed reality system.

“Various attacks might involve noise to compromise voice commands, smoke or fog to reduce the reliability of hand gestures in low light, or sand and dust that invalidate touch interactions,” he says. “Other attacks might compromise eye tracking in over-bright or infrared-flooded environments, or even ‘flood the zone’ with false targets that can confuse the user. Our job is to anticipate potential cognitive attacks and neutralize them.”

Five strategies

Under a cognitive attack, system dynamics are effectively weaponized against the user. An enemy might introduce distractions or misleading information or inflict high levels of physiological stress (cybersickness) upon the user, rendering the system useless and effectively defeated.

DARPA specified five different categories of attacks to address. “We view each as a mini project, but with some core modeling components shared between them,” says Lubin.

The first of the five potential attacks is physiological in nature, inflicting nausea, dizziness, headaches, and neck strain upon the wearer. The second is perceptual — altering the wearer’s ability to perceive the environment.

The third targets the user’s attention — distracting the wearer with useless or misleading information. The fourth takes aim at the wearer’s confidence in the system: these attacks can cause the user to distrust the system and stop using it. “The enemy regains the tactical advantage of operational darkness,” Lubin says.

“What we needed was a probabilistic model of human performance — what are they most likely to do?” — Jeffrey Lubin

The fifth and final type is known as a status attack — monitoring users’ interactions with the mixed reality system to gain tactically relevant information about the user’s physiological, perceptual, attentional, or emotional state.

For example, by observing the user’s responses to known stimuli through the system, the attacker can learn about reaction times, types and frequencies of errors, and other human performance problems that can provide tactical advantages in the field.

Probable cause

Lubin says he believes the team won the contract largely because of its proven expertise in mixed reality systems and the innovative approach it proposed to address unknown weaknesses. In practical terms, the three-phase project plan calls for rigorous computational and psychological approaches to deter cognitive attacks.

“Bottom line, when you give a stimulus to a human, they don’t always do exactly the same thing, unlike a computer. We realized quickly that the usual definition of computer formal methods wasn’t going to work. And what we needed was a probabilistic model of human performance — what are they most likely to do?” Lubin explains. “That is the breakthrough here. It’s the only way to really solve this problem.”
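The contrast Lubin draws can be sketched in a few lines of code. This is a hypothetical illustration, not SRI’s actual model: where a formal method would assert a fixed guarantee such as “the wearer responds within 500 ms,” a probabilistic model treats reaction time as a distribution and asks how an attack shifts the probability of missing that deadline. The numbers and the Gaussian form below are illustrative assumptions.

```python
import random

def miss_probability(mean_ms, sd_ms, deadline_ms, trials=100_000, seed=0):
    """Estimate the chance a reaction time exceeds a deadline.

    Hypothetical sketch: reaction time is modeled as a Gaussian draw
    rather than a fixed number, so the answer is a probability
    estimated by simulation, not a yes/no guarantee.
    """
    rng = random.Random(seed)
    misses = sum(rng.gauss(mean_ms, sd_ms) > deadline_ms for _ in range(trials))
    return misses / trials

# Baseline wearer vs. a wearer under a hypothetical attentional attack
# that (by assumption) slows the mean response and widens its spread.
baseline = miss_probability(mean_ms=350, sd_ms=60, deadline_ms=500)
attacked = miss_probability(mean_ms=450, sd_ms=120, deadline_ms=500)
print(f"P(miss) baseline: {baseline:.3f}, under attack: {attacked:.3f}")
```

The point of the sketch is the shape of the question: a cognitive attack does not make the wearer fail outright, it shifts the distribution of their performance, and countermeasures are judged by how much of that shift they undo.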

The work will begin by analyzing user tasks to reveal potential attacks and by gathering data to establish baseline performance standards and refine attack models.

In phase two, insights from this groundwork will be used to refine models and understand the performance degradations an enemy attack might exploit. Finally, in phase three, the team will develop and test mitigation strategies.

For Lubin, this project is the culmination of a lot of hard work. The prospect of working on new, undefined questions is a challenge he is eager to accept.

“I’m looking forward to understanding human performance in difficult situations and to solving some of the longstanding problems in mixed reality,” Lubin says.

“Working with Ajay on probabilistic reasoning, Teddy and Supun, the AR guys, and these great university partners who are all the leaders in their fields, is really exciting.”

