Artificial intelligence (AI) holds great promise for enabling Europe to sustainably manage growing air traffic and an increasingly complex airspace. But unleashing its full potential requires making AI applications explainable and, with that, trustworthy, says Mobyen Uddin Ahmed, Associate Professor in Artificial Intelligence/Computer Science at Mälardalen University, Sweden, and project coordinator of ARTIMATION.

What is meant by a black box? Why does AI have to be explainable?

“Black box” refers to a system whose inputs and outputs are known but whose internal mechanism is hidden from humans. How does the decision-making system work? Why did the AI take that specific decision? Take Apple’s Siri or Amazon’s Alexa: we can ask them to play random songs without knowing how those songs were chosen. Knowing how music choices are made is no big deal, but the same cannot be said for decision making in air traffic management (ATM), a safety-critical industry. Today, ATM operators do not know “why” or “how” a certain decision has been taken, which lowers their trust in the system. Making AI “explainable” is therefore a critical step towards improving trust and reliability in the interaction between humans and AI.
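The black-box/explainable distinction can be illustrated with a toy example. The sketch below uses a hypothetical linear "conflict risk" scorer whose per-feature contributions double as an explanation; the feature names, weights, and units are invented for illustration and have no connection to ARTIMATION's actual models.

```python
# Toy sketch: a linear scorer whose per-feature contributions act as a
# simple explanation. All names and weights here are hypothetical.

def predict_with_explanation(features, weights, bias=0.0):
    """Return a risk score plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"closing_speed_kts": 0.004, "vertical_sep_ft": -0.001}
features = {"closing_speed_kts": 450, "vertical_sep_ft": 900}

score, why = predict_with_explanation(features, weights)
# A black box would report only `score`; an explainable system also
# surfaces `why`, showing which inputs drove the decision.
```

A black-box system would stop at the score; surfacing the contributions is the simplest form of the "why" an operator needs in order to trust the output.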

The SESAR 3 JU has many projects addressing AI. How does your project compare/contrast with the others?

Indeed, there are several projects looking at AI applications to help cope with complex and dynamic ATM scenarios. What sets the ARTIMATION project apart is its use of advanced AI techniques to help controllers visualise decision-making processes within an acceptable timeframe. For example, in ARTIMATION we are validating a 3D conflict detection and resolution tool built with explainable AI (XAI) algorithms, to investigate if and how controllers understand and accept the XAI outcomes.

How difficult is it to introduce AI/ML into ATM?

In ATM, there are many experts with years of in-depth operational experience in dealing with complex scenarios. ARTIMATION is looking at how to transfer that experience and knowledge into the system, and at how to collect quality data to train the AI algorithms. We are also looking at predicting how the AI will continue learning, adapting itself to a changing world with varying inputs. Lifelong machine learning with human-centred AI is in fact one of the core elements of the project.
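Lifelong machine learning, in its simplest form, means updating a model incrementally as new observations arrive rather than retraining from scratch. The sketch below shows a minimal online update for a linear model via stochastic gradient descent; it is a generic illustration under invented data, not ARTIMATION's actual learning pipeline.

```python
# Minimal sketch of incremental ("lifelong") learning: a linear model is
# updated one observation at a time as data streams in. Data and learning
# rate are hypothetical, for illustration only.

def sgd_step(weights, x, y, lr=0.01):
    """One stochastic-gradient update for a linear model, squared loss."""
    pred = sum(w * xi for w, xi in zip(weights, x))
    err = pred - y
    return [w - lr * err * xi for w, xi in zip(weights, x)]

weights = [0.0, 0.0]
stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([1.0, 1.0], 3.0)]
for x, y in stream * 200:   # the model keeps adapting as data keeps coming
    weights = sgd_step(weights, x, y)
```

The point of the design is that no full retraining pass is ever needed: each new observation nudges the model, so it can track a changing operational environment.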

How could the results of your project be used by the authorities, air navigation service providers or end users?

ARTIMATION will explore AI explainability in two tools: a conflict detection and resolution tool and a delay propagation tool. Both will help clarify the next steps for the ATM field in increasing the use of automation to support controllers, ANSPs and other end users in their activities. Within the project we will also draw up guidelines on the feasibility of XAI in different applications and further deepen knowledge on explainability.

What benefits do you hope your project will bring?

A more understandable AI will help improve safety and system performance by adapting its use to the context. As the complexity of ATM systems grows, so does the need to improve and optimise them, leading to better performance across the sector as a whole.

Read about the project

More about AI in ATM


The project work has been funded by the European Commission under H2020-EU.3.4.7., G.A. ID 894238.