Technical Director, Artificial Intelligence Center
Melinda Gervasio, Ph.D., is a Technical Director in SRI’s Artificial Intelligence Center. She focuses on developing technology that combines human and machine intelligence.
Gervasio’s research interests include intelligent assistants, machine learning for autonomous agents, adaptive personalization, interactive machine learning, end-user programming, and intelligent training systems. As the technical lead on numerous government-funded projects, she has developed technologies for learning from demonstration, adaptive assistance, proactive decision support, and recommendation for informal learning.
Before joining SRI, Gervasio co-founded MindShadow, where she played a key technical role in developing a software platform for personalized, content-based recommendation. She was also a research scientist at the Institute for the Study of Learning and Expertise, where she worked on adaptive personalization.
Gervasio has a Ph.D. in computer science from the University of Illinois at Urbana-Champaign and a B.S. in computer science from the University of the Philippines Diliman.
Recent publications
- IxDRL: A Novel Explainable Deep Reinforcement Learning Toolkit based on Analyses of Interestingness
  Our tool provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms, natively supporting the popular RLLib toolkit.
- Global and Local Analysis of Interestingness for Competency-Aware Deep Reinforcement Learning
  Our new framework provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms.
- A Framework for Understanding and Visualizing Strategies of RL Agents
  We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
- Outcome-Guided Counterfactuals for Reinforcement Learning Agents from a Jointly Trained Generative Latent Space
  We present a novel generative method for producing unseen and plausible counterfactual examples for reinforcement learning (RL) agents based upon outcome variables that characterize agent behavior.
- Confidence Calibration for Domain Generalization under Covariate Shift
  We present novel calibration solutions via domain generalization. Our core idea is to leverage multiple calibration domains to reduce the effective distribution disparity between the target and calibration domains for…
- Interestingness Elements for Explainable Reinforcement Learning: Understanding Agents’ Capabilities and Limitations
  We propose an explainable reinforcement learning (XRL) framework that analyzes an agent’s history of interaction with the environment to extract interestingness elements that explain its behavior.