Vision-aided approach and landing through AI-based vertiport recognition / Veneruso, Paolo; Miccio, Enrico; Opromolla, Roberto; Fasano, Giancarmine; Gentile, Giacomo; Tiana, Carlo. - (2023), pp. 1270-1277. (Paper presented at the 2023 International Conference on Unmanned Aircraft Systems (ICUAS), held in Warsaw on 6-9 June 2023) [10.1109/ICUAS57906.2023.10155914].
Vision-aided approach and landing through AI-based vertiport recognition
Paolo Veneruso; Enrico Miccio; Roberto Opromolla; Giancarmine Fasano; Giacomo Gentile; Carlo Tiana
2023
Abstract
This paper presents a vision-aided navigation pipeline to support the approach and landing phase of autonomous Vertical Take-Off and Landing aircraft in Urban Air Mobility scenarios. The proposed filtering scheme is fed by measurements provided by an Inertial Measurement Unit and a GNSS receiver, as well as by pose estimates computed from images collected by onboard cameras. Specifically, the camera frames are processed by a Convolutional Neural Network (CNN) trained to detect the vertiport landing marking in urban scenarios. Subsequently, the relevant 2D features of the pattern inside the resulting bounding box are extracted, recognized and used to solve the Perspective-n-Point problem. The performance of the implemented navigation filter is first analyzed using synthetic data collected by simulating realistic landing trajectories. Then, two different training strategies are compared to verify the contribution of real data to the detection performance and to check the capability of the CNN to correctly identify the pattern in the tested scenarios. In addition, the entire pipeline for landing pad detection and pose estimation is tested on real images under various pose, illumination and background conditions.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
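The abstract describes solving the Perspective-n-Point problem from the 2D features of the vertiport marking, which lies on the ground plane. The paper's record gives no implementation details; purely as an illustration, the planar case can be sketched by estimating the plane-to-image homography and decomposing it into a camera pose. Everything below (function names, the square-pad corner coordinates, the camera intrinsics, the test pose) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3-D points (N,3) through a pinhole camera with intrinsics K."""
    p = K @ (R @ pts3d.T + t[:, None])   # 3xN homogeneous pixel coordinates
    return (p[:2] / p[2]).T              # (N,2) pixel coordinates

def homography_dlt(src, dst):
    """Direct Linear Transform: homography mapping src (N,2) to dst (N,2)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)          # null-space vector as 3x3 matrix

def planar_pnp(K, pts_plane, pts_img):
    """Camera pose from points on the z=0 plane via homography decomposition."""
    H = homography_dlt(pts_plane[:, :2], pts_img)
    B = np.linalg.inv(K) @ H
    B /= np.linalg.norm(B[:, 0])         # fix the homography scale
    r1, r2, t = B[:, 0], B[:, 1], B[:, 2]
    if t[2] < 0:                         # pad must be in front of the camera
        r1, r2, t = -r1, -r2, -t
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)          # re-orthonormalize the rotation
    return U @ Vt, t

# Synthetic check: project hypothetical pad corners (metres, z=0) with a
# known pose, then recover that pose from the resulting pixel coordinates.
pad = np.array([[-1.0, -1.0, 0.0], [1.0, -1.0, 0.0],
                [1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R_true = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
t_true = np.array([0.5, -0.3, 10.0])
R_est, t_est = planar_pnp(K, pad, project(K, R_true, t_true, pad))
```

In the pipeline described by the abstract, a pose obtained this way would feed the navigation filter as a measurement alongside the IMU and GNSS data; a production implementation would typically use a robust PnP solver (e.g. with RANSAC) rather than this minimal four-point sketch.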