Society is becoming increasingly dependent on artificial intelligence (AI), which makes it all the more important to instil trust and security in its use. This becomes easier once humans understand how AI systems think and operate.
The aim of ARTIMATION was to address challenges related to the transparency of automated systems in air traffic management using explainable AI (XAI). The research focused on two main use cases: conflict detection and resolution, and delay prediction and propagation. It proposed tools that aim to improve the explainability of AI algorithms through data-driven storytelling and immersive analytics, with the purpose of assessing the effectiveness of different visualization techniques.
Both conflict detection and resolution and delay prediction and propagation are useful applications that support controllers' everyday tasks and help air navigation service providers improve performance in air traffic management, while keeping the human in full control of the AI decision support. By introducing data-driven and user-driven storytelling for each use case, researchers were able to explore how machine learning could be applied to support controllers, air navigation service providers, and generic end users in their activities.
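As an illustrative sketch only (not the project's actual tooling), a data-driven explanation for a delay prediction could pair the model's output with per-feature contributions that a controller can inspect, ordered so the "story" leads with the dominant factor. The feature names, weights, and linear model below are all invented for demonstration:

```python
# Illustrative sketch: explaining a flight-delay prediction via per-feature
# contributions from a simple linear model. Feature names and weights are
# invented for demonstration; ARTIMATION's actual models and data differ.

def explain_delay_prediction(features, weights, baseline):
    """Return predicted delay (minutes) and each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = baseline + sum(contributions.values())
    return prediction, contributions

# Hypothetical inputs: weather severity, traffic density, turnaround slack.
features = {"weather_severity": 0.8,
            "traffic_density": 0.6,
            "turnaround_slack": -0.3}
weights = {"weather_severity": 15.0,
           "traffic_density": 10.0,
           "turnaround_slack": 20.0}

pred, contrib = explain_delay_prediction(features, weights, baseline=5.0)
# Sort contributions so the explanation leads with the dominant factor.
story = sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

With these invented numbers the prediction is 17.0 minutes, and the explanation surfaces weather severity (+12 minutes) as the leading contributor, which is the kind of per-factor narrative a data-driven storytelling interface can then visualize.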
The project also explored how to integrate different levels of explanation into an adaptive passive brain-computer interface. This would enable AI systems to fit controllers' contextual explainability needs and accommodate changes in mental and emotional states (e.g., workload, stress) captured by neurophysiological measures.
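The adaptation logic can be pictured as a mapping from a measured workload index to an explanation depth. The thresholds and level names below are assumptions for demonstration, not values from the project:

```python
# Illustrative sketch: adapting the depth of an AI explanation to a workload
# index (e.g. derived from neurophysiological signals). The thresholds and
# level names are assumptions for demonstration only.

def select_explanation_level(workload_index):
    """Map a normalised workload index in [0, 1] to an explanation depth."""
    if workload_index > 0.7:
        return "minimal"   # high workload: show only the advised action
    if workload_index > 0.4:
        return "summary"   # moderate workload: action plus dominant factor
    return "detailed"      # low workload: full data-driven storytelling
```

The design intent is that a controller under high workload is not burdened with a full explanation, while a controller with spare capacity receives the complete data-driven story.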
ARTIMATION represents a small step along the path to building trust in, and reliance on, AI systems. It demonstrated the importance of effective and immersive data visualization for increasing end-user acceptance, using machine learning examples from air traffic management. The main outcome of this effort was an improved understanding of how machine learning should be developed, and the identification of measures aimed at keeping the human in the loop through transparent AI models supported by novel data visualization.
Benefits
- Transparent AI models
- Increased trust in AI
- Human-centred design