Relative visual navigation based on CNN in a proximity operation space mission
A. D’Ortona, G. Daddi
Abstract. This article explores a solution utilizing a convolutional neural network (CNN) to simulate robust monocular visual navigation during the proximity operations of a space mission, where precise determination of the relative pose is crucial for mission safety. The scenario involves a CubeSat closely observing a target spacecraft under challenging illumination conditions. The methodology consists of generating a synthetic dataset with the Blender software and training a Mask R-CNN with a ResNet-50 backbone to identify relevant features of the target's 3D model; the dataset's ground-truth poses are obtained by solving the inverse Perspective-n-Point (PnP) problem. Overall, this work provides valuable insights into the potential of deep-learning-based visual navigation techniques for enhancing space mission operations.
Keywords
Navigation, Space, CNN
Published online 9/1/2023, 6 pages
Copyright © 2023 by the author(s)
Published under license by Materials Research Forum LLC., Millersville PA, USA
Citation: A. D’Ortona, G. Daddi, Relative visual navigation based on CNN in a proximity operation space mission, Materials Research Proceedings, Vol. 33, pp 9-14, 2023
DOI: https://doi.org/10.21741/9781644902677-2
The article was published as article 2 of the book Aerospace Science and Engineering
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.