Author: Yi Yao
-
Time-Space Processing for Small Ship Detection in SAR
This paper presents a new 3D time-space detector for small ships in single-look complex (SLC) synthetic aperture radar (SAR) imagery. The detector is optimized for small targets, roughly 5-15 m long, that appear unfocused because of target motion induced by ocean surface waves.
-
Generating and Evaluating Explanations of Attended and Error-Inducing Input Regions for VQA Models
Error maps can indicate when a correctly attended region is nonetheless processed incorrectly, leading to a wrong answer, and can hence improve users' understanding of such cases.
-
Modular Adaptation for Cross-Domain Few-Shot Learning
While the literature has demonstrated great success via representation learning, in this work we show that downstream-task performance can also be improved by appropriate design of the adaptation process.
-
Confidence Calibration for Domain Generalization under Covariate Shift
We present novel calibration solutions via domain generalization. Our core idea is to leverage multiple calibration domains to reduce the effective distribution gap between the target and calibration domains, improving calibration transfer without requiring any data from the target domain.
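As background for this calibration setting: the standard single-domain baseline is temperature scaling, which fits one scalar T on held-out data so that softened softmax probabilities match observed accuracy. The sketch below is a generic, minimal illustration of that baseline (function names are my own, not from the paper); the paper's contribution is transferring such calibration across domains without target data.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: larger T gives softer, less confident probabilities."""
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def nll(logit_sets, labels, T):
    """Average negative log-likelihood of the true labels at temperature T."""
    total = 0.0
    for logits, y in zip(logit_sets, labels):
        total -= math.log(softmax(logits, T)[y])
    return total / len(labels)

def fit_temperature(logit_sets, labels, grid=None):
    """Pick the temperature minimizing NLL on a held-out calibration set."""
    grid = grid or [0.5 + 0.1 * i for i in range(46)]  # search T in [0.5, 5.0]
    return min(grid, key=lambda T: nll(logit_sets, labels, T))
```

For an overconfident model (large logit margins but imperfect accuracy), the fitted temperature comes out above 1, i.e., the probabilities are softened toward the true accuracy.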
-
Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning
We introduce Hybrid Consistency Training to jointly leverage interpolation consistency (including interpolation of hidden features), which imposes locally linear behavior, and data-augmentation consistency, which learns embeddings robust to sample variations.
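To make the interpolation-consistency term concrete: in its generic (mixup-style) form, a model is penalized when its output on a convex combination of two inputs differs from the same convex combination of its outputs. The sketch below is a minimal, framework-free illustration of that idea, not the paper's implementation; the function names are my own.

```python
def mixup(x1, x2, lam):
    """Convex combination of two vectors with mixing coefficient lam in [0, 1]."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def interpolation_consistency_loss(f, x1, x2, lam):
    """Squared error between f(mixed input) and the mix of f's outputs.

    Minimizing this over random pairs pushes f toward locally linear behavior:
    an exactly linear f incurs zero loss, a nonlinear f incurs a positive loss.
    """
    mixed_out = f(mixup(x1, x2, lam))
    out_mix = mixup(f(x1), f(x2), lam)
    return sum((a - b) ** 2 for a, b in zip(mixed_out, out_mix))
```

In the paper's setting the interpolation is also applied to hidden features rather than only raw inputs, but the consistency target has this same form.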
-
Stacked Spatio-Temporal Graph Convolutional Networks for Action Segmentation
We propose novel Stacked Spatio-Temporal Graph Convolutional Networks (Stacked-STGCN) for action segmentation, i.e., predicting and localizing a sequence of actions over long videos.
-
Lucid Explanations Help: Using a Human-AI Image-Guessing Game to Evaluate Machine Explanation Helpfulness
We propose a Twenty-Questions style collaborative image retrieval game as a method of evaluating the efficacy of explanations (visual evidence or textual justification) in the context of Visual Question Answering.
-
Evaluating Visual-Semantic Explanations using a Collaborative Image Guessing Game
While there have been many proposals on making AI algorithms explainable, few have attempted to evaluate the impact of AI-generated explanations on human performance in conducting human-AI collaborative tasks. To bridge the gap, we propose a Twenty-Questions style collaborative image retrieval game, Explanation-assisted Guess Which (ExAG), as a method of evaluating the efficacy of…