Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/43236
Title: Explainable artificial intelligence
Authors: ROUSSEAU, Axel-Jan 
GEUBBELMANS, Melvin 
VALKENBORG, Dirk 
BURZYKOWSKI, Tomasz 
Issue Date: 2024
Publisher: MOSBY-ELSEVIER
Source: American journal of orthodontics and dentofacial orthopedics, 165 (4), p. 491-494
Abstract: In previous articles, we discussed several machine learning (ML) methods that can be used to address classification or regression tasks. Many of these methods are considered "black boxes," as they involve a large number of coefficients, making it impossible to understand their predictions and decisions. There are many reasons why an ML model may perform badly, and this lack of transparency may make it difficult to trace the source of the problem. For instance, real-world applications are often very different from the cases included in the dataset that was used for training the model. There is a famous anecdote, which was also tested,1 about a classifier that was trained to distinguish between images of wolves and huskies. The model performed very well on the test set but failed miserably when used in the real world. On closer inspection, it turned out that many of the wolf images used for training and testing were taken outside with a snowy background. Because of that, the model learned to classify wolves on the basis of the presence of snow in the picture. This gave reasonable results for the images used for training and testing, but not when the images were taken in the real world in different seasons.
Even if the data used during training are a good reflection of reality, they may be a good representation only at that time. The data distribution may change (drift) over time, leading to bad performance of a model trained on earlier data. Continuing with the previous example, wolves are easier to track in wintertime, so in the past, pictures were mostly taken in landscapes covered with snow. Nowadays, however, automatic wildlife cameras are cheap and omnipresent and can take pictures throughout the year. In another example, consider a self-driving car that was trained with thousands of hours of representative video data of traffic. Ten years from now, many new makes and models of cars will drive the streets, none of which were seen during training. To guarantee safety over time, it is necessary to understand how the model will react to these new data.
Models (and the data they are trained on) may also have hidden biases and, for instance, learn to discriminate against minority groups. This may lead to unfair and unethical decisions. As ML is used in many sensitive domains, this may become a serious issue, which should be detected and the source of which should be identified. One famous example is Amazon's artificial intelligence (AI) recruiting tool, which turned out to be biased against women.2 Translation services like Google Translate have also shown gender bias and stereotypes.3 When translating the Finnish gender-neutral pronoun "hän" to English, "hän on lääkäri" translates to "he is a doctor," but "hän on sairaanhoitaja" translates to "she is a nurse." Google has since addressed this by showing both gendered translations, but similar biases may still exist in many language models.
Deep learning models could also uncover relationships in datasets that experts may have missed. These learned associations do not necessarily imply causation, but they could be used to create new hypotheses to test. However, for domain experts to extract and use this knowledge, it is necessary to interpret the model. Therefore, there has been a surge in research on explainable artificial intelligence (XAI) methods that can be used to make models more interpretable.
The results of these methods can be presented in different ways: a method can generate a textual explanation; heatmaps can be used to visually indicate important parts of an image; and summary statistics or scores may be assigned to the explanatory variables (features) used in a model to indicate their importance in the modeling process. There are also different approaches to how and what an XAI method tries to explain. Some methods try to explain predictions locally, for a single observation; others take a global view and try to explain the predictive performance of the model as a whole.
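To make the local/global distinction concrete, the sketch below (not part of the article; the dataset, model, and library choices are illustrative assumptions) shows one widely used global, model-agnostic technique: permutation feature importance, which scores a feature by how much the model's test performance drops when that feature's values are randomly shuffled.

```python
# Minimal sketch of a global, model-agnostic explanation via permutation
# feature importance, using scikit-learn. Dataset and model are arbitrary
# choices for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a tabular classification dataset as a DataFrame so feature names are kept.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a "black box" model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the resulting drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts the model the most.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Local methods such as LIME or SHAP take the complementary view: they explain one prediction at a time, for example by fitting a simple surrogate model in the neighborhood of a single observation, as was done in the wolf-versus-husky experiment mentioned above.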
Notes: Burzykowski, T (corresponding author), Hasselt Univ, Data Sci Inst, Agoralaan 1, Bldg D, B-3590 Diepenbeek, Belgium.
tomasz.burzykowski@uhasselt.be
Document URI: http://hdl.handle.net/1942/43236
ISSN: 0889-5406
e-ISSN: 1097-6752
DOI: 10.1016/j.ajodo.2024.01.006
ISI #: 001226828600001
Rights: © 2024 by the American Association of Orthodontists
Category: A2
Type: Journal Contribution
Appears in Collections: Research publications

Files in This Item:
File: Explainable artificial intelligence.pdf
Description: Published version
Size: 96.71 kB
Format: Adobe PDF
Access: Restricted Access (request a copy)