Author: Supun Samarasekera
-
SIGNAV: Semantically-Informed GPS-Denied Navigation and Mapping in Visually-Degraded Environments
We present SIGNAV, a real-time semantic SLAM system designed to operate in perceptually challenging situations.
-
Global Heading Estimation for Wide Area Augmented Reality Using Road Semantics for Geo-referencing
We present a method to estimate global camera heading by associating directional information from road segments in the camera view with annotated satellite imagery.
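As a rough, illustrative sketch of this kind of association (not the method from the paper; the bearing conventions, function names, and single-segment setup below are assumptions), a global heading can be recovered from one matched road segment by combining the segment's geo-referenced bearing from the satellite map with its direction relative to the camera:

# Illustrative sketch only (Python): recover a global camera heading from one
# matched road segment. Assumes bearings are measured clockwise from north in
# degrees and that the road's angle relative to the camera axis has already
# been estimated from the image. A single road direction is ambiguous by 180
# degrees; resolving that ambiguity is outside this sketch.

def wrap_deg(angle):
    """Wrap an angle into the [0, 360) degree range."""
    return angle % 360.0

def global_heading(map_road_bearing_deg, road_angle_in_camera_deg):
    """Camera heading = geo-referenced road bearing minus the road's
    angle relative to the camera's optical axis (both clockwise, in degrees)."""
    return wrap_deg(map_road_bearing_deg - road_angle_in_camera_deg)

# Example: the satellite map gives the matched road a bearing of 75 degrees,
# and the road appears rotated 30 degrees to the right of the camera axis,
# so the camera heading is roughly 45 degrees (north-east).
print(global_heading(75.0, 30.0))  # -> 45.0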
-
Long-Range Augmented Reality with Dynamic Occlusion Rendering
This paper addresses the problem of fast and accurate reasoning about dynamic occlusion of virtual content by real objects in the scene for large-scale outdoor AR applications.
-
MaAST: Map Attention with Semantic Transformers for Efficient Visual Navigation
In this work, we design a novel approach that performs comparably to, or better than, existing learning-based solutions while operating under a strict time and computational budget.
-
RGB2LIDAR: Towards Solving Large-Scale Cross-Modal Visual Localization
We study an important yet largely unexplored problem of large-scale cross-modal visual localization: matching ground RGB images to a geo-referenced aerial LIDAR 3D point cloud.
-
Semantically-Aware Attentive Neural Embeddings for 2D Long-Term Visual Localization
We present an approach that combines appearance and semantic information for 2D image-based localization (2D-VL) across large perceptual changes and time lags.
-
Multi-Sensor Fusion for Motion Estimation in Visually-Degraded Environments
This paper analyzes the feasibility of utilizing multiple low-cost on-board sensors for ground robots or drones navigating in visually-degraded environments.
-
Augmented Reality Driving Using Semantic Geo-Registration
We propose a new approach that utilizes semantic information to register 2D monocular video frames to the world using 3D georeferenced data, for augmented reality driving applications.
-
Utilizing Semantic Visual Landmarks for Precise Vehicle Navigation
This paper presents a new approach for integrating semantic information into vision-based vehicle navigation.
-
Sub-Meter Vehicle Navigation Using Efficient Pre-Mapped Visual Landmarks
This paper presents a vehicle navigation system that is capable of achieving sub-meter GPS-denied navigation accuracy in large-scale urban environments, using pre-mapped visual landmarks.
-
AR-Weapon: Live Augmented Reality Based First-Person Shooting System
This paper introduces a user-worn Augmented Reality (AR) based first-person weapon shooting system (AR-Weapon), suitable for both training and gaming.
-
AR-Mentor: Augmented Reality Based Mentoring System
The system combines a wearable Optical See-Through (OST) display device with high-precision 6-Degree-Of-Freedom (DOF) pose tracking and a virtual personal assistant (VPA) capable of natural-language verbal conversational interaction, providing guidance to the user in the form of visual, audio, and locational cues.