UAV Pose Estimation using Cross-view Geolocalization with Satellite Imagery
We propose an image-based cross-view geolocalization method that estimates the global pose of a UAV with the aid of georeferenced satellite imagery. Our method consists of two Siamese neural networks that extract relevant features despite large viewpoint differences. The input to our method is an aerial UAV image together with nearby satellite images, and the output is a weighted global pose estimate of the UAV camera. We also present a framework that integrates our cross-view geolocalization output with visual odometry through a Kalman filter. We build a dataset of simulated UAV images and satellite imagery to train and test our networks. We show that our method outperforms previous camera pose estimation methods, and we demonstrate our networks' ability to generalize to test datasets with unseen images. Finally, we show that integrating our method with visual odometry significantly reduces trajectory estimation errors.
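The abstract does not give implementation details, but the fusion step it describes (visual odometry driving the prediction, a weighted geolocalization fix driving the correction) follows the standard Kalman filter structure. The sketch below is a minimal 2D illustration under stated assumptions, not the authors' implementation; the class name `GeoVOFusion`, the `geo_weight` parameter, and the mapping from network confidence to measurement covariance are all hypothetical.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): visual odometry supplies a
# relative displacement for the predict step, and the cross-view
# geolocalization network supplies a weighted absolute position fix used
# as the measurement. State is the 2D position [x, y] in a georeferenced
# frame, so the state-transition and measurement matrices are identity.

class GeoVOFusion:
    def __init__(self, init_pos, init_cov):
        self.x = np.asarray(init_pos, dtype=float)   # position estimate
        self.P = np.asarray(init_cov, dtype=float)   # estimate covariance

    def predict(self, vo_delta, vo_cov):
        """Propagate the state with a visual-odometry displacement."""
        self.x = self.x + np.asarray(vo_delta)       # x_k = x_{k-1} + u_k
        self.P = self.P + np.asarray(vo_cov)         # VO drift accumulates

    def update(self, geo_pos, geo_weight):
        """Correct with a weighted cross-view geolocalization fix.

        geo_weight in (0, 1]: higher network confidence maps to smaller
        measurement noise (an assumed interpretation of the paper's
        "weighted global pose estimate").
        """
        R = np.eye(2) / geo_weight                   # measurement noise
        S = self.P + R                               # innovation covariance
        K = self.P @ np.linalg.inv(S)                # Kalman gain
        self.x = self.x + K @ (np.asarray(geo_pos) - self.x)
        self.P = (np.eye(2) - K) @ self.P

# Usage: VO predictions run every frame; geolocalization fixes arrive
# whenever a satellite match is available.
fuser = GeoVOFusion(init_pos=[0.0, 0.0], init_cov=np.eye(2))
fuser.predict(vo_delta=[1.0, 0.2], vo_cov=np.eye(2) * 0.05)
fuser.update(geo_pos=[1.1, 0.15], geo_weight=0.8)
print(fuser.x)
```

Because absolute fixes bound the covariance growth from the predict step, even sparse geolocalization updates keep the VO drift in check, which is consistent with the reported reduction in trajectory estimation error.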