28 June 2024

EXPLANATION TYPES OF XAI

Before diving into the details of XAI methods, it is helpful to consider their input and output types, since this gives a concrete picture of the task and a more comprehensive understanding of it. Typically, one input of an XAI method is the machine learning model whose behavior we want to explain, which can be trained for any task. Depending on the method, the algorithm may also require a single data point or an entire dataset. From these inputs, the XAI method produces an explanation of the model's behavior. According to [1], the output can take several forms, which are exemplified below with possible scenarios.

 

Numerical Explanations

 

A typical example of numerical outputs would be feature importance. Feature importance in the context of machine learning and artificial intelligence is like understanding which ingredients in a recipe are most crucial to the dish's success. Imagine you're baking a cake. Certain ingredients, like flour and eggs, are fundamental—they have a high "importance" because changing them significantly affects the outcome of your cake. The texture and taste will be noticeably different if you use too much or too little.

 

Similarly, in machine learning models that make predictions (like guessing the price of a house), "features" are like the ingredients. These features could include the size of the house, its location, the number of bedrooms, etc. Feature importance tells us which factors have the most significant impact on the model's predictions. For instance, the model might find that location substantially affects the price (like flour to the cake), while the house's color might have a minimal impact (like adding a pinch of salt to your cake). Understanding feature importance helps us know what information the model uses to make its decisions, just like knowing which ingredients are crucial to making your cake taste good.

 

Imagine you're selling your house, and you use an app that predicts its market price. This app considers details like your house's size, age, location, and how many bedrooms and bathrooms it has. After analyzing this information, the app tells you the estimated price and how much each detail—like the size or location—affected that price in simple numbers. For example, it might say that being close to a school added $5,000 to your house’s value, but its age lowered it by $3,000. This way, you get a clear, numerical snapshot of what makes your house more or less valuable in the market, making it easier to understand how the predicted price was determined.

 

Here, the original model simply takes the house information and outputs a price; the XAI method explains the relationship between each specific piece of information and that predicted price.
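
To make this concrete, here is a minimal sketch (not the app's actual method) of how such numbers could be produced for a linear price model: for a linear model, the coefficient times the difference from the average feature value is an exact per-feature contribution in dollars. All feature names and values below are made up for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = ["size_sqm", "age_years", "km_to_school", "bedrooms"]

# Synthetic houses and prices, purely for illustration.
X = rng.normal(size=(500, 4)) * [30, 10, 2, 1] + [120, 20, 3, 3]
y = X @ np.array([1500, -800, -2500, 4000]) + 50000 + rng.normal(scale=5000, size=500)

model = LinearRegression().fit(X, y)

house = X[0]                                        # the house being explained
contributions = model.coef_ * (house - X.mean(axis=0))
print("Predicted price:", round(model.predict(house.reshape(1, -1))[0]))
for name, value in zip(features, contributions):
    print(f"{name}: {value:+,.0f} compared with an average house")

The printed contributions are exactly the kind of numerical snapshot described above: one signed dollar amount per feature, showing what pushed the price up or down.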

 

Rules

 

Imagine using a fitness app that evaluates your yoga poses through photos. You take a picture of yourself performing a pose, and the app uses a complex algorithm to analyze your posture, alignment, and form. It's not just looking at the picture; it is understanding the nuances of your position, comparing it with ideal yoga poses, and considering factors that aren't immediately obvious from just looking at the image.

 

However, instead of giving you complex feedback filled with technical jargon, the app simplifies its analysis into a scalar score on a scale of 1 to 10, where 10 represents a perfect pose. Alongside the score, it provides a rule-based explanation like, "Your score is seven because your arms need to be straighter and your feet wider apart for better balance."

 

This scenario involves an explainability method in which the app's complex image-based analysis (which likely involves deep learning techniques to process and evaluate your yoga pose from the photo) is distilled into straightforward, rule-based feedback. This feedback gives you a precise score and simple, actionable advice on how to improve, translating sophisticated image analysis into something you can easily understand and act upon.
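
One common way to obtain such rules, sketched below only as an illustration of the idea rather than the app's actual pipeline, is to fit a small, interpretable surrogate model to the predictions of the complex scorer. Here a shallow decision tree mimics a stand-in scorer over hypothetical pose features, and its branches print as if-then rules.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
features = ["arm_straightness", "feet_distance_cm", "hip_alignment"]
X = rng.uniform(0, 1, size=(1000, 3)) * [1.0, 100.0, 1.0]

# Stand-in for the complex image-based scorer (outputs scores from 1 to 10).
scores = 1 + 9 * (0.5 * X[:, 0] + 0.3 * X[:, 1] / 100 + 0.2 * X[:, 2])
complex_model = RandomForestRegressor(random_state=0).fit(X, scores)

# Distill it into a depth-2 tree: an interpretable, rule-based surrogate.
surrogate = DecisionTreeRegressor(max_depth=2, random_state=0)
surrogate.fit(X, complex_model.predict(X))
print(export_text(surrogate, feature_names=features))

Each root-to-leaf path of the printed tree reads as a rule of the form "if your arms are this straight and your feet are this far apart, your score is about X", which is the kind of feedback the app shows.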

 

Textual Explanations

 

Imagine using an app that predicts the best time to post photos on social media for maximum engagement. Using a complex machine learning model, the app analyzes patterns like when your followers are most active, what kind of content gets more likes, and so on. However, when it gives you advice, it doesn’t just spit out numbers or charts; instead, it provides a textual explanation that's easy to understand.

 

For example, after crunching the data, the app might tell you, "Your followers are most active on weekends in the evening. Photos related to outdoor activities get 20% more likes. Based on this, we suggest posting your hiking trip photos this Saturday at 7 PM for the best engagement."

 

This scenario uses a technique to translate the model's complex data analysis into a simple, textual explanation. The technique could be an explainable AI method such as LIME (Local Interpretable Model-agnostic Explanations), which breaks the model's prediction down into the contributions of individual features; those contributions can then be phrased as easy-to-understand reasons. This way, even though the underlying AI isn't based on text, it communicates its suggestions in a friendly, straightforward manner, clarifying why it's giving you specific advice.
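
As a rough sketch of that last step, the function below turns attribution weights (such as the feature weights a LIME-style explainer might return) into a plain-language message via a simple template. The feature descriptions, weights, and suggestion are all hypothetical.

def explain_in_words(attributions, suggestion):
    """attributions: list of (feature description, weight) pairs from an explainer."""
    positives = [f for f, w in attributions if w > 0]
    negatives = [f for f, w in attributions if w < 0]
    sentences = []
    if positives:
        sentences.append("Engagement tends to rise when " + " and ".join(positives) + ".")
    if negatives:
        sentences.append("It tends to drop when " + " and ".join(negatives) + ".")
    sentences.append("Based on this, " + suggestion)
    return " ".join(sentences)

print(explain_in_words(
    [("you post on weekend evenings", 0.32),
     ("the photo shows outdoor activity", 0.20),
     ("you post on weekday mornings", -0.15)],
    "we suggest posting your hiking photos this Saturday around 7 PM.",
))

The heavy lifting is still done by the explainer that produces the weights; the template only verbalizes them.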

 

Visual Explanations

 

Example 1

 

Imagine using a health app that predicts your stress level based on your sleep pattern, physical activity, heart rate, and daily schedule. The app uses a complex machine-learning model that analyzes all this data to estimate your stress level. However, the app shows you a visual chart instead of numbers or technical explanations.

 

This chart might use colors and sizes to represent different factors: blue for sleep, green for physical activity, red for heart rate, and yellow for your schedule. Each factor's impact on your stress level is shown by the size of its colored section on the chart. So, if you see a large red section, your heart rate is contributing heavily to your stress level, suggesting you need to relax more.

 

This visual explanation method helps you quickly see what's affecting your stress level the most without understanding the complex data analysis behind it. The app could use a technique like feature importance visualization, which takes the model’s abstract findings and turns them into an easy-to-understand graphic, making the insights from the model accessible to everyone, regardless of their technical background.
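
A minimal sketch of such a chart, assuming the importance values have already been computed by some explainer (the numbers below are made up), might look like this:

import matplotlib.pyplot as plt

factors = ["Sleep", "Physical activity", "Heart rate", "Daily schedule"]
importance = [0.20, 0.15, 0.45, 0.20]        # hypothetical contributions to stress
colors = ["tab:blue", "tab:green", "tab:red", "gold"]

# Each factor's share of the chart reflects its estimated impact on stress.
plt.pie(importance, labels=factors, colors=colors, autopct="%1.0f%%")
plt.title("What is driving today's stress estimate?")
plt.show()

The large red slice makes the heart-rate finding obvious at a glance, without exposing any of the underlying model.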

 

Example 2

 

Imagine using a photo management app that automatically categorizes your photos based on their content, like landscapes, portraits, or events. The app uses a complex machine learning model to analyze the images and understand what's in them—mountains, faces, or birthday parties. However, the app doesn’t just tell you the category; it shows you why it categorized a photo a certain way.

 

When you click on a photo, the app highlights the areas of the image that influenced its decision. For instance, when categorizing a photo as a landscape, it might highlight the sky and mountains; if it's a portrait, it could outline the faces. This visual feedback helps you understand the app's decision-making process at a glance.

 

This scenario uses a visual explainability method to make the model's decisions transparent. By visually highlighting what features in the photo led to its categorization, the app provides an intuitive understanding of the complex image recognition process. This method bridges the gap between the app's sophisticated visual analysis and the user's need for simple, comprehensible explanations, enhancing trust and clarity without delving into technical details.
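
One simple technique behind this kind of highlighting is occlusion sensitivity: cover a region of the image, re-run the classifier, and treat the drop in confidence as that region's importance. The sketch below uses a toy stand-in classifier (not a real image model) that is more confident in "landscape" when the top half of the image is bright, so only the "sky" regions light up.

import numpy as np

def predict_landscape_prob(image):
    # Stand-in classifier: more confident when the top half (the "sky") is bright.
    return float(image[: image.shape[0] // 2].mean())

def occlusion_map(image, patch=8):
    base = predict_landscape_prob(image)
    heat = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0      # cover this region
            # The more the score drops, the more this region mattered.
            heat[i:i + patch, j:j + patch] = base - predict_landscape_prob(occluded)
    return heat

toy_image = np.vstack([np.ones((16, 32)), np.zeros((16, 32))])   # bright sky, dark ground
heat = occlusion_map(toy_image)
print("mean importance, top half:", heat[:16].mean(), "| bottom half:", heat[16:].mean())

Overlaying such a heat map on the photo produces exactly the kind of highlighted regions described above; real apps typically use richer methods (for example, gradient-based saliency) but the intuition is the same.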

 

Mixed Explanations

 

Imagine using an intelligent cooking app that suggests recipes based on ingredients you have at home. You take a picture of your kitchen pantry, and the app uses a complex machine-learning model to identify the ingredients in the photo. Then, it suggests a recipe based on what it finds, your dietary preferences, and your cooking history.

 

The app explains its suggestion in a mixed modality: visually, by highlighting the ingredients it recognized in your pantry photo; textually, by listing those ingredients and explaining why they make the recipe a good choice (e.g., "Using your fresh tomatoes and basil, we suggest making a Margherita pizza because it's quick and you loved Italian recipes in the past."); and numerically, by showing a score that represents how well the recipe matches your preferences and ingredient availability.

 

This mixed explanation method combines visual cues, textual descriptions, and numerical scores to provide a comprehensive, easy-to-understand rationale behind the app's recipe suggestions. This approach ensures that you get personalized recipe recommendations and clearly understand why these recipes are suggested, making the decision-making process transparent and engaging.
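
A minimal sketch of how such an explanation might be represented in code is shown below; every field and value is hypothetical, and the point is only that the visual, textual, and numerical parts travel together as one explanation object.

from dataclasses import dataclass

@dataclass
class MixedExplanation:
    highlighted_regions: list      # visual: bounding boxes of recognized ingredients
    rationale: str                 # textual: why this recipe was suggested
    match_score: float             # numerical: fit to preferences and pantry (0 to 1)

explanation = MixedExplanation(
    highlighted_regions=[("tomatoes", (40, 60, 120, 160)), ("basil", (200, 30, 260, 90))],
    rationale=("Using your fresh tomatoes and basil, we suggest a Margherita pizza "
               "because it is quick and you have enjoyed Italian recipes before."),
    match_score=0.87,
)
print(f"Match score: {explanation.match_score:.0%}. {explanation.rationale}")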

 

References

 

[1] Vilone, Giulia, and Luca Longo. "Classification of explainable artificial intelligence methods through their output formats." Machine Learning and Knowledge Extraction 3.3 (2021): 615-661.