28 June 2024

ALGORITHM TYPES IN XAI

XAI has become a fast-growing field as black box-based ML algorithms are increasingly used in critical applications, and many publications now cover both the technical details and the taxonomy of XAI algorithms. Although numerous taxonomies for XAI methodologies have been proposed, building a common hierarchy from them is a complex task. It is therefore meaningful to construct a multi-faceted taxonomy that handles the different characteristics of XAI algorithms. Following the research in [1], we will walk through the parts of this multi-faceted taxonomy and their subclasses.

 

The study offers different views on the classification of XAI algorithms: the functioning-based approach, the result-based approach, the conceptual approach, and the mixed approach. Additionally, the problem type and the output type can be used as classifiers when inspecting XAI algorithms, as we did in the 3rd part of our blog post series. Let us discuss each of these headings in detail.

 

The Functioning-Based Approach

 

The central idea behind this classification of XAI algorithms is how an algorithm operates on the ML model being explained. To be precise, the algorithm is expected to extract some information from the ML model, and this operational step can be described in terms of functions. We can divide these functions into the following groups:

 

  • Explaining with local perturbations: This kind of algorithm perturbs the input of the ML model to find the most meaningful features in the data. Imagine altering or corrupting a specific feature in the input data and running inference again: if the output of the ML model changes significantly, the altered feature is likely meaningful for the decision process. The output of such algorithms is therefore a feature importance ranking (a minimal sketch follows this list).

 

  • Leveraging structure: Some methods use the internal characteristics of ML models to explain their behavior. A typical example is gathering information through gradients in neural networks: the gradients give clues about the importance of the input values. Structure-leveraging methods of this kind also result in a feature importance ranking (a gradient-saliency sketch follows this list).

 

  • Meta explanations: Methods that use meta explanations do not investigate the ML model directly. Instead, they take the outputs of other explanation methods and ensemble multiple explanations to obtain more reliable feature importance scores (a small aggregation sketch follows this list).

 

  • Architecture modification: Methods in this category operate on the complex architectural structure of an ML model. The architecture is simplified so that the model's behavior can be examined more easily, which can make the results more understandable.

 

  • Examples: Examples are selected from the test data based on how the model handles them during inference. If an ML model processes an example with high confidence, that example becomes a good candidate for explaining the model's behavior.
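
To make the local-perturbation category concrete, below is a minimal Python sketch of perturbation-based feature importance. It assumes a hypothetical `predict_fn` that maps an (n_samples, n_features) array to one numeric output per sample (e.g., a regression value or a class probability); it is an illustration of the idea, not a production implementation.

```python
import numpy as np

# Minimal sketch of perturbation-based feature importance.
# `predict_fn` is a hypothetical numeric scoring function (see the lead-in above).
def perturbation_importance(predict_fn, X, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it changes the model output."""
    rng = np.random.default_rng(seed)
    baseline = predict_fn(X)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_pert = X.copy()
            rng.shuffle(X_pert[:, j])      # corrupt a single feature column
            deltas.append(np.mean(np.abs(predict_fn(X_pert) - baseline)))
        scores[j] = np.mean(deltas)        # large output change => important feature
    ranking = np.argsort(scores)[::-1]     # most important feature first
    return ranking, scores
```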
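
For the structure-leveraging category, here is a gradient-saliency sketch. It assumes a differentiable PyTorch model; both `model` and the input tensor `x` are hypothetical placeholders.

```python
import torch

# Minimal gradient-saliency sketch; `model` is a hypothetical differentiable
# PyTorch model and `x` a float input tensor.
def gradient_saliency(model, x):
    """Rank input features by the magnitude of the output's gradient."""
    x = x.clone().detach().requires_grad_(True)
    model(x).sum().backward()   # reduce to a scalar so backward() is defined
    return x.grad.abs()         # large gradient => output is sensitive to that input
```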
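
And for the meta-explanation category, aggregation can be as simple as rank-averaging the attribution vectors produced by several other explainers. The sketch below assumes the vectors are already computed and contain one score per feature.

```python
import numpy as np

# Sketch of a meta explanation: rank-aggregate attributions from several explainers.
def ensemble_attributions(attributions):
    """Combine several feature-attribution vectors into one consensus ranking."""
    # Per explainer, rank features so that 0 = most important.
    ranks = [np.argsort(np.argsort(-np.abs(a))) for a in attributions]
    mean_rank = np.mean(ranks, axis=0)
    return np.argsort(mean_rank)   # consensus ordering, most important first
```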

 

The Result-Based Approach

 

This taxonomy classifies XAI methods based on their resulting explanation type.

 

  • Feature importance: As noted in the previous section, many XAI algorithms produce a feature importance scoring, which determines which features of the input data are relevant to the final decision.

 

  • Surrogate models: Surrogate models are trained on input-output pairs of a given ML model. These pairs need not match the ground truth: since the goal is to explain the model's behavior, the model's outputs are used even when they are incorrect. Surrogate models are typically chosen to be highly interpretable, which makes extracting explanations straightforward (see the sketch after this list).

 

  • Examples: Another method of explanation involves using representative examples. For instance, data points from a model’s training set that exhibit particularly high or low confidence in classification can serve as such examples.
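
As a sketch of the surrogate idea, the snippet below fits a shallow decision tree on a black-box model's own predictions. Here `black_box` stands for any fitted model with a predict() method, and the tree depth is an illustrative choice.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Minimal global-surrogate sketch; `black_box` is a hypothetical fitted classifier.
def fit_surrogate(black_box, X, max_depth=3):
    """Train an interpretable tree to mimic the black box's input-output behavior."""
    y_model = black_box.predict(X)          # labels come from the model, not ground truth
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_model)
    fidelity = surrogate.score(X, y_model)  # how faithfully the tree mimics the model
    return surrogate, fidelity

# The fitted tree can then be rendered as human-readable rules, e.g.:
# print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score is worth reporting alongside the extracted rules: an unfaithful surrogate explains itself, not the original model.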

 

The Conceptual Approach

 

This approach classifies XAI methods based on the concepts introduced in the 2nd part of this blog series:

 

  • Stage: Categorizes XAI algorithms as ante-hoc or post-hoc. Ante-hoc methods create explanations during the model training process. On the other hand, post-hoc methods work solely on previously trained (or frozen) models.

 

  • Applicability: Categorizes XAI algorithms as model-agnostic or model-specific. A model-agnostic method can generally be applied across all models: it works with various models regardless of the type of artificial intelligence model used. On the other hand, a model-specific method is designed for a particular class of models.

 

  • Scope: Categorizes XAI algorithms as local or global. A global method provides a general explanation encompassing all data points, informing us about the model's overall behavior or general trends; for example, such a method could address the entire dataset to understand the general characteristics of a classification model. A local method, on the other hand, limits the explanation to a specific point or a few points, focusing on the model's decisions at particular data points that are often identified as interesting or critical examples (a minimal local-explanation sketch follows this list).
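
To illustrate the local end of this spectrum, the sketch below occludes one feature at a time for a single instance. `predict_fn` is the same kind of hypothetical numeric scoring function used in the perturbation sketch earlier, and the zero baseline is an assumption that only makes sense for standardized features.

```python
import numpy as np

# Sketch of a local explanation: per-feature occlusion for one instance `x`.
# `predict_fn` is a hypothetical numeric scoring function; baseline_value = 0.0
# assumes standardized features.
def local_occlusion(predict_fn, x, baseline_value=0.0):
    """Measure each feature's effect on one specific prediction."""
    original = predict_fn(x.reshape(1, -1))[0]
    effects = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        x_occ = x.copy()
        x_occ[j] = baseline_value          # replace one feature with a neutral value
        effects[j] = original - predict_fn(x_occ.reshape(1, -1))[0]
    return effects                         # per-feature effect on this one prediction
```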

 

In addition, XAI methods can be classified based on their output format (as in the 3rd part of this blog series) or the type of the original problem (such as classification, regression, etc.).

           

The Mixed Approach

 

It is worth mentioning that some taxonomies are built as hierarchies that combine hybrid versions of the concepts explained above.

 

Conclusion

 

In conclusion, a diverse and sophisticated taxonomy of XAI methodologies has emerged as the field of XAI continues to grow with the increasing use of opaque ML models in critical applications. This taxonomy, detailed through various approaches in our blog series, aims to provide a structured understanding of how these algorithms function and the types of explanations they produce.

 

We have explored several classification strategies for XAI methods, including functioning-based, result-based, and conceptual approaches, each offering unique insights into the algorithms' operations and outputs. For example, functioning-based approaches categorize XAI methods according to their interaction with the ML models. In contrast, result-based approaches focus on the type of explanation output, such as feature importance or surrogate models.

 

Furthermore, the conceptual approach provides a broader classification based on the stage of implementation (ante-hoc vs. post-hoc), applicability (model-agnostic vs. model-specific), and scope (local vs. global explanations). These categorizations help us understand the wide range of available techniques and their suitability for different scenarios.

 

Lastly, the introduction of mixed approaches in the taxonomy acknowledges XAI's complexity and multifaceted nature, suggesting that a hybrid classification can effectively capture the nuanced characteristics of various XAI methods.

 

As the field progresses, it will be crucial to continue refining these taxonomies to better guide researchers and practitioners in selecting, developing, and applying the most appropriate XAI methods for their specific needs, thereby enhancing transparency and trust in machine learning applications.

 

References

 

[1] Speith, Timo. "A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods." Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22), 2022.