Below is a structured summary of the project’s main results, together with references to several journal articles that present developments and validations directly connected to each of the three lines of work.
Line 1 — Development of concepts and automated management systems for urban air mobility (U-Space/UAM/UTM/ATM)
The project has made significant progress in defining and validating new operational concepts for the integrated management of urban air traffic, covering the entire spectrum from strategic planning to tactical deconfliction and autonomous control. The simulator developed within the project reproduces complex urban air mobility scenarios, fleet-level management, advanced authorization processes, and collaborative surveillance mechanisms, with a level of completeness and detail that exceeds what is typically found in the literature. Results associated with this line, particularly those concerning cooperative surveillance among aircraft and the integration of sensing capabilities for safe traffic management, are presented in the articles "Review and Simulation of Counter-UAS Sensors for Unmanned Traffic Management" and "Modelling and Simulation of Collaborative Surveillance for Unmanned Traffic Management", which provide an in-depth analysis of sensing architectures and surveillance strategies for advanced UTM environments.
Line 2 — Technologies for data fusion, visualization and interaction for aerial and urban information management
In parallel, the project has developed advanced technologies for the acquisition, interpretation, fusion and visualization of information produced by drones and other sources within the smart city. These include context-aware fusion mechanisms, deep learning models for understanding heterogeneous data, new human–machine interaction paradigms, and immersive solutions for urban control centers. The work also explores natural interaction through mid-air gestures, multimodal visual interpretation using state-of-the-art models, and efficient edge processing techniques that enable on-board AI capabilities in aerial vehicles. Results aligned with this line, particularly those on gesture-based interaction and the acceptance of new control syntaxes, are presented in "Assessing the Acceptance of a Mid-Air Gesture Syntax for Smart Space Interaction: An Empirical Study". Complementary advances in computer vision and multimodal models appear in "Exploring the Use of Contrastive Language-Image Pre-Training for Human Posture Classification: Insights from Yoga Pose Analysis", while the evaluation of AI models deployed on edge hardware for drone applications is detailed in "A Performance Analysis of You Only Look Once Models for Deployment on Constrained Computational Edge Devices in Drone Applications".
Line 3 — Demonstrators and advanced applications
Finally, the project’s developments have been validated through demonstrators that combine real flights, subject to regulatory constraints, with high-fidelity simulation, applying the developed technologies to multimodal transport, urban planning and security scenarios. These demonstrators enable the assessment of autonomous fleet operations, integration with urban services, and system performance under realistic conditions. Within this line, results on advanced cooperative behaviours and decision-making based on deep reinforcement learning are reported in "Application of Deep Reinforcement Learning to UAV Swarming for Ground Surveillance", which explores distributed coordination strategies applicable to surveillance and urban monitoring missions.
ACKNOWLEDGMENTS
This website is part of the project PID2020-118249RB-C21, funded by MCIN/AEI/10.13039/501100011033.
CONTACTS
E.T.S.I. de Telecomunicación
Avda. Complutense, nº 30
28040 – Madrid
Spain