Crater-based Autonomous Position Estimation in Planetary Missions by Deep-Learning / Del Prete, R.; Renga, A. - B2:(2021). (Paper presented at the IAF Space Communications and Navigation Symposium 2021 at the 72nd International Astronautical Congress, IAC 2021, held in 2021).
Crater-based Autonomous Position Estimation in Planetary Missions by Deep-Learning
Del Prete R.; Renga A.
2021
Abstract
Spacecraft missions venturing beyond Earth rely on ranging, dedicated payloads, or support systems that require facilities such as the Deep Space Network or ESTRACK. This demands significant resources both on Earth and onboard, especially in monetary terms. Furthermore, the satellite link is not always guaranteed, and results are not available in real time. Therefore, to cruise independently of Earth-based operators and to meet the requirements of upcoming planetary exploration missions, this manuscript proposes a novel vision-based terrain relative navigation (TRN) system. TRN is promising because it can be applied to a wide range of space missions, e.g. planetary exploration (rocky planets), the study of the moons of gaseous planets, and the approach phase to comets, asteroids, and other celestial bodies. In essence, a spacecraft can retrieve its absolute position by matching a pattern of observed craters against a database. The measurements thus obtained can be integrated into a navigation filter to estimate the spacecraft state (position and velocity). The ability to detect and match surface features to a map is crucial for TRN. However, craters vary widely in appearance depending on image quality, lighting geometry, and noise. For these reasons, realizing a crater detector able to generalize to different scenarios is complex. Moreover, this task must be performed robustly to keep the navigation accuracy high. In the past, this has led to least-squares approaches in which corrupted navigation states render otherwise good images ineffectual, leading to unnecessary filter reinitializations, trajectory aborts (e.g. during lunar descent), or other undesirable events. In contrast, the proposed solution is reliable, combining the strengths of a region-based convolutional neural network (Mask R-CNN) with the robustness of the theory of projective invariants. An extended Kalman filter completes the TRN system, further increasing its stability. Despite the use of medium-resolution (118 m/px) data, results showed that the navigation accuracy lies below 400 meters in the best-case scenario for a satellite orbiting the Moon at about 50 km altitude. This is expected to guarantee real-time autonomous onboard operations with no need for ground support.
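To make the measurement-fusion step concrete, the Python sketch below shows how an absolute position fix obtained from crater matching could be fed into a position/velocity navigation filter. It is a minimal illustration under stated assumptions, not the authors' implementation: the constant-velocity dynamics, noise levels, and orbit numbers are invented for the example, and only the linear position-update step is shown (a full extended Kalman filter would linearize the actual orbital dynamics in the prediction step).

# Minimal, illustrative sketch (assumed names and numbers, not from the paper):
# fuse an absolute position fix from crater matching into a position/velocity filter.
import numpy as np

def propagate(x, P, Q, dt):
    # Predict step with a constant-velocity model; state x = [r (3,), v (3,)].
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)        # r_{k+1} = r_k + v_k * dt
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update_with_position_fix(x, P, z, R):
    # Correct step; z is the 3-D absolute position delivered by crater matching.
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is observed
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Toy usage: ~50 km lunar orbit, 10 s step, 400 m (1-sigma) fixes -- assumed values.
x = np.array([1787e3, 0.0, 0.0, 0.0, 1.65e3, 0.0])   # position [m], velocity [m/s]
P = np.diag([1e6, 1e6, 1e6, 10.0, 10.0, 10.0])
Q = np.diag([1.0, 1.0, 1.0, 1e-2, 1e-2, 1e-2])
R = (400.0 ** 2) * np.eye(3)
x, P = propagate(x, P, Q, dt=10.0)
z = np.array([1787e3, 16.5e3 + 250.0, 0.0])          # crater-based fix with 250 m error
x, P = update_with_position_fix(x, P, z, R)

In the paper, such position fixes would come from matching Mask R-CNN crater detections against a crater catalogue via projective invariants; here the fix is simply supplied as an input vector.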