Collaborative human-robot autonomy publications
-
SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments
We present SayNav, a new approach that leverages human knowledge from Large Language Models (LLMs) for efficient generalization to complex navigation tasks in unknown large-scale environments.
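To make the idea concrete, here is a minimal sketch (our illustration, not SayNav's actual pipeline): a partially observed scene is flattened into a textual prompt and an LLM is asked for the next short-horizon subgoal. `query_llm` is a hypothetical stand-in for any chat-completion API.

```python
# Hypothetical sketch of LLM-grounded dynamic planning, NOT SayNav's code.
# query_llm() is a placeholder for any real chat-completion API.

def query_llm(prompt: str) -> str:
    # Stand-in: a real system would call an LLM here.
    return "go to the desk in the bedroom"

def describe_scene(scene_graph: dict) -> str:
    """Flatten an incrementally built scene description into prompt text."""
    return "\n".join(
        f"{room}: {', '.join(objects)}" for room, objects in scene_graph.items()
    )

def plan_next_subgoal(scene_graph: dict, goal: str) -> str:
    prompt = (
        "You are guiding a robot in a house it has never seen.\n"
        f"Observed so far:\n{describe_scene(scene_graph)}\n"
        f"Task: find the {goal}. Suggest the next place to search."
    )
    return query_llm(prompt)

if __name__ == "__main__":
    partial_map = {"bedroom": ["bed", "desk"], "hallway": ["door"]}
    print(plan_next_subgoal(partial_map, "laptop"))
```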
-
Ranging-Aided Ground Robot Navigation Using UWB Nodes at Unknown Locations
This paper describes a new ranging-aided navigation approach that does not require the locations of ranging radios.
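One way such a formulation can look (a toy sketch assuming a 2D robot with odometry, not the paper's estimator) is a joint nonlinear least-squares solve over the trajectory and the unknown anchor position:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy joint estimation of a 2D trajectory and one UWB anchor at an
# unknown location, from odometry increments and range measurements.
# Illustrative formulation only, not the paper's estimator.

rng = np.random.default_rng(0)
true_poses = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 1.0]])
true_anchor = np.array([2.0, 3.0])

odom = np.diff(true_poses, axis=0) + rng.normal(0, 0.02, (3, 2))
ranges = np.linalg.norm(true_poses - true_anchor, axis=1) + rng.normal(0, 0.05, 4)

def residuals(x):
    poses = x[:8].reshape(4, 2)   # 4 robot positions
    anchor = x[8:]                # unknown anchor position
    r_prior = poses[0]            # pin the first pose at the origin
    r_odom = (np.diff(poses, axis=0) - odom).ravel()
    r_range = np.linalg.norm(poses - anchor, axis=1) - ranges
    return np.concatenate([r_prior, r_odom, r_range])

x0 = np.concatenate([np.zeros(8), [1.0, 1.0]])  # crude initial guess
sol = least_squares(residuals, x0)
print("estimated anchor:", sol.x[8:])
```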
-
Graph Mapper: Efficient Visual Navigation by Scene Graph Generation
We propose a method that trains an autonomous agent to accumulate a 3D scene graph representation of its environment while simultaneously learning to navigate through it.
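A bare-bones version of the underlying data structure (our sketch, not the paper's learned representation) is a graph whose nodes are observed objects or regions and whose edges record spatial relations accumulated over time:

```python
from dataclasses import dataclass, field

# Minimal scene-graph accumulator, illustrative only: the paper learns
# this representation; here nodes and relations are inserted by hand.

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)  # name -> attributes
    edges: list = field(default_factory=list)  # (src, relation, dst)

    def observe(self, name: str, **attrs) -> None:
        self.nodes.setdefault(name, {}).update(attrs)

    def relate(self, src: str, relation: str, dst: str) -> None:
        edge = (src, relation, dst)
        if edge not in self.edges:
            self.edges.append(edge)

g = SceneGraph()
g.observe("kitchen", kind="room")
g.observe("table", kind="object", position=(1.2, 0.4, 0.0))
g.relate("table", "inside", "kitchen")
print(g.edges)  # [('table', 'inside', 'kitchen')]
```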
-
SASRA: Semantically-aware Spatio-temporal Reasoning Agent for Vision-and-Language Navigation in Continuous Environments
This paper presents a novel approach for the Vision-and-Language Navigation (VLN) task in continuous 3D environments.
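At the core of agents in this family is a fusion of language and spatial context. A minimal PyTorch sketch of such a fusion step (our assumption, not SASRA's architecture) cross-attends instruction tokens over features of a local semantic map:

```python
import torch
import torch.nn as nn

# Illustrative cross-attention between an encoded instruction and
# semantic-map features; all dimensions are arbitrary, not SASRA's.

d = 64
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

instruction = torch.randn(1, 12, d)      # 12 instruction tokens
map_feats = torch.randn(1, 16 * 16, d)   # flattened 16x16 semantic map cells

fused, weights = attn(query=instruction, key=map_feats, value=map_feats)
print(fused.shape)    # torch.Size([1, 12, 64])
print(weights.shape)  # torch.Size([1, 12, 256])
```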
-
MaAST: Map Attention with Semantic Transformers for Efficient Visual Navigation
We design a novel approach that performs better than or comparably to existing learning-based solutions while operating under a clear time and compute budget.
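One plausible reading of the budgeted design (a sketch, not the paper's model) is a deliberately small transformer encoder over tokenized egocentric semantic-map cells; keeping the encoder shallow and the map coarse is one way to bound inference cost:

```python
import torch
import torch.nn as nn

# Small transformer over egocentric semantic-map tokens; layer count
# and widths are illustrative stand-ins for a compute budget.

d = 64
layer = nn.TransformerEncoderLayer(
    d_model=d, nhead=4, dim_feedforward=128, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)  # shallow on purpose

map_tokens = torch.randn(1, 8 * 8, d)  # coarse 8x8 semantic map
features = encoder(map_tokens)
action_logits = nn.Linear(d, 4)(features.mean(dim=1))  # stop/fwd/left/right
print(action_logits.shape)  # torch.Size([1, 4])
```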
-
Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs
This paper proposes a real-time navigation approach that integrates many sensor types while meeting performance needs and system constraints.
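Factor graphs are a standard way to express this kind of plug-and-play fusion. The sketch below uses the GTSAM library (our minimal example, not the paper's implementation): each sensor contributes factors, and adding another sensor means adding another factor type rather than restructuring the estimator.

```python
import numpy as np
import gtsam

# Minimal factor-graph fusion sketch with GTSAM, illustrative only.

graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

# Prior on the first pose, e.g. from a known start.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0, 0, 0), noise))
# Odometry between poses; another sensor would simply add more factors.
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(1, 0, 0), noise))

initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.0, 0.0, 0.0))
initial.insert(2, gtsam.Pose2(0.9, 0.1, 0.0))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))
```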
-
Robust Visual Path Following for Heterogeneous Mobile Platforms
We present an innovative path-following system based on multi-camera visual odometry and visual landmark matching.
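A toy version of the landmark-matching step (our sketch, not the paper's system) localizes against a taught path by finding the waypoint whose descriptor best matches the live view, then steers toward its stored heading:

```python
import numpy as np

# Toy landmark matching for path following. Descriptors here are random
# stand-ins for real visual features; this shows the matching logic only.

rng = np.random.default_rng(1)
path_descriptors = rng.normal(size=(50, 128))    # one per taught waypoint
path_headings = np.linspace(0.0, np.pi / 2, 50)  # stored headings (rad)

def localize(live_descriptor: np.ndarray) -> int:
    """Nearest taught waypoint by cosine similarity."""
    sims = path_descriptors @ live_descriptor
    sims /= np.linalg.norm(path_descriptors, axis=1) * np.linalg.norm(live_descriptor)
    return int(np.argmax(sims))

live = path_descriptors[20] + rng.normal(0, 0.1, 128)  # noisy revisit
idx = localize(live)
heading_error = path_headings[idx] - 0.3  # current heading assumed 0.3 rad
print(f"matched waypoint {idx}, steer by {heading_error:+.2f} rad")
```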