Rakesh “Teddy” Kumar
Vice President, Information and Computing Sciences and Director, Center for Vision Technologies
Rakesh “Teddy” Kumar, Ph.D., is director of the Center for Vision Technologies in Information and Computing Sciences at SRI International. In this role, he is responsible for leading research and development in the fields of computer vision, robotics, image processing, computer graphics, and visualization algorithms and systems for government and commercial clients.
In 2013, Kumar was honored with the Outstanding Achievement in Technology Development award from his alma mater, the University of Massachusetts Amherst School of Computer Science. He received the Sarnoff President's Award in 2009 and Sarnoff Technical Achievement Awards for his work in registration of multi-sensor, multi-dimensional medical images and alignment of video to three-dimensional scene models. The paper "Stable Vision-Aided Navigation for Large-Area Augmented Reality," which he co-authored, received the Best Paper Award at the IEEE Virtual Reality 2011 conference.
Kumar has served on NSF review and DARPA ISAT panels. He has also been an associate editor for IEEE Transactions on Pattern Analysis and Machine Intelligence. He has co-authored more than 60 research publications and holds more than 50 patents. Kumar was a principal founder of multiple spin-off companies from Sarnoff Corporation, including VideoBrush, LifeClips, and SSG.
Kumar received his Ph.D. in Computer Science from the University of Massachusetts Amherst. He holds an M.S. in Electrical and Computer Engineering from the State University of New York at Buffalo and a B.Tech. in Electrical Engineering from the Indian Institute of Technology, Kanpur, India.
Recent publications
- Machine Learning Aided GPS-Denied Navigation Using Uncertainty Estimation through Deep Neural Networks
  We describe and demonstrate a novel approach for generating accurate and interpretable uncertainty estimation for outputs from a DNN in real time.
- Night-Time GPS-Denied Navigation and Situational Understanding Using Vision-Enhanced Low-Light Imager
  In this presentation, we describe and demonstrate a novel vision-enhanced low-light imager system that provides GPS-denied navigation and ML-based visual scene understanding capabilities for both day and night operations.
- Vision based Navigation using Cross-View Geo-registration for Outdoor Augmented Reality and Navigation Applications
  In this work, we present a new vision-based cross-view geo-localization solution matching camera images to a 2D satellite/overhead reference image database. We present solutions for both coarse search for…
- Cross-View Visual Geo-Localization for Outdoor Augmented Reality
  We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database. Recently, neural network-based methods have shown state-of-the-art performance in…
- Augmented Reality for Marine Fire Support Team Training
  To provide FiSTs with the "sets and reps" required to develop and maintain proficiency, the Office of Naval Research 3D Warfighter Augmented Reality (3D WAR) program is developing an affordable…
- Optimized Simultaneous Aided Target Detection and Imagery based Navigation in GPS-Denied Environments
  We describe and demonstrate a comprehensive optimized vision-based real-time solution to provide SATIN capabilities for current and future UAS in GPS-denied environments.