
New Insights into the Explainability of Artificial Intelligence


A study by Prof. Christian Janiesch from the Department of Computer Science at TU Dortmund University challenges a common assumption in AI research: "The more powerful the method, the harder it is to explain." In tests involving medical image diagnostics, he and colleagues from Würzburg and Magdeburg showed that physicians found individual AI analyses sometimes easier and sometimes harder to understand than mathematical and programmatic considerations had previously suggested. The results have been published in the International Journal of Information Management.

The umbrella term artificial intelligence covers a variety of methods, which today are de facto all based on machine learning, i.e. the programs have learned on their own how to make decisions. Most scientific studies to date that have examined how explainable these systems are have relied on assumptions derived from mathematical principles. They concluded that explainability decreases as performance increases: the more complex the application, the more difficult it is to explain. Deep artificial neural networks are therefore extremely powerful but not comprehensible to humans, while decision trees are usually less powerful but easy to explain.
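
To illustrate the conventional assumption described above, here is a minimal sketch (not taken from the study) that uses scikit-learn and its built-in breast cancer dataset as a hypothetical stand-in for medical tabular data: the decision tree's learned rules can be exported as readable if-then statements, whereas the neural network exposes only its weight matrices.

```python
# Illustrative sketch of the conventional performance-vs-explainability contrast.
# Dataset and model choices are assumptions for demonstration, not the study's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Shallow decision tree: its learned rules print as human-readable if-then statements.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", round(tree.score(X_test, y_test), 3))
print(export_text(tree, feature_names=list(data.feature_names)))

# Small neural network: often stronger, but it only exposes weight matrices,
# which offer no directly human-readable explanation of an individual diagnosis.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
)
mlp.fit(X_train, y_train)
print("Neural network accuracy:", round(mlp.score(X_test, y_test), 3))
print("Neural network 'explanation':", [w.shape for w in mlp[-1].coefs_])
```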

Prof. Janiesch and his team approached the question of explainability from a different perspective: instead of relying on theoretical models, they analyzed the assessments of professionals who work with AI on a daily basis. "Technological solutions can only be successful in the long term if they are applied by people who also understand the problem," explains the business informatics professor. This socio-technical approach made it possible to look at explainability from the perspective of real-world application and practice. To this end, the researchers worked with medical professionals, who assessed how comprehensibly different AI methods processed symptoms of heart disease or brain scans.

While previous studies assumed a linear or curvilinear relationship between performance and explainability, the research by Prof. Janiesch and his team revealed a group-like pattern in the explainability of AI systems. Assumptions about the systems' performance were largely confirmed, but the results for explainability diverged: some models previously thought to be more explainable were in fact understood poorly by the experts, and vice versa.

While the deep artificial neural networks were rated as expected, there were notable differences for decision trees in particular: users rated them as far more explainable than previous considerations had suggested. Prof. Janiesch adds: "Our research shows that the explainability of AI should not be based solely on mathematical analyses, but on the perspective of those who work with this technology in practice. After all, when AI is used, users must first build trust, and that is only possible if its workings are intelligible."

The findings highlight the need for greater involvement of users in the process of AI development and deployment to ensure that the technology is not only powerful, but also explainable and practical to use. This is particularly relevant when critical decisions are made on the basis of AI predictions, such as in medicine.

About the publication

Lukas-Valentin Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch: Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability. International Journal of Information Management, Volume 69, 2023. https://doi.org/10.1016/j.ijinfomgt.2022.102538

To the press release

www.tu-dortmund.de/universitaet/aktuelles/detail/neue-erkenntnisse-zur-erklaerbarkeit-von-kuenstlicher-intelligenz-34642/