27 June 2024

XAI, AFFECTED STAKEHOLDERS & REGULATORY ATTEMPTS

XAI: Its Goal and Why It Matters

 

The term Explainable Artificial Intelligence (XAI) refers to methods that convert the rationale behind a complex AI system's decisions into outputs that humans can understand. Such outputs range from sets of rules to heatmaps over input images, which indicate the parts of a picture the model relied on when making its decision.
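As a concrete illustration of the heatmap-style explanations mentioned above, the sketch below computes a simple gradient-based saliency map for an image classifier. It is a minimal, assumption-laden example: it presumes PyTorch and a recent torchvision are installed, and the untrained ResNet-18 and random input tensor are stand-ins for a real deployed model and a real photograph.

# Minimal saliency-map sketch (assumes PyTorch and torchvision >= 0.13).
# The untrained ResNet-18 and the random tensor are placeholders, not a real system.
import torch
from torchvision import models

model = models.resnet18(weights=None)   # stand-in for a trained classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy RGB image

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Back-propagate the predicted class score to the input pixels.
logits[0, top_class].backward()

# Saliency: per-pixel gradient magnitude, maximum over color channels.
# High values mark pixels that most influence the prediction, which is
# what a heatmap explanation visualizes.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)   # shape: (224, 224)
print(saliency.shape)

In practice, such a map would be overlaid on the original image so that a human can see which regions drove the prediction.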

 

Although XAI-related notions such as user-friendliness and transparency have been discussed for decades, the rapid development of deep-learning-based AI has made more sophisticated explainability methods, and sounder definitions of the terms around them, increasingly necessary. The reason lies in the intrinsic structure of deep neural networks. These networks are trained by algorithms that adjust the network weights (the parameters through which the network maps inputs to outputs) over many iterations on vast amounts of data, and the number of parameters typically ranges from millions to billions. As a result, it is practically impossible for a human to explain the behavior of a deep neural network directly. Models that are hard, or nearly impossible, for humans to explain are called black-box models, since their internal decision-making rules are opaque to human understanding. As AI becomes more and more embedded in our daily lives, the number of applications in critical fields such as finance, health care, and autonomous driving keeps growing, which makes the need for XAI even more pressing: both users and developers want to understand the reasoning underlying the decisions AI makes.
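To give a sense of that scale, the short snippet below (an illustrative assumption, not something from the text) counts the learned parameters of an off-the-shelf ResNet-50; even this moderately sized vision model carries roughly 25 million weights, far more than anyone could inspect rule by rule.

# Count the learned parameters of a stock ResNet-50 (assumes torchvision >= 0.13).
from torchvision import models

model = models.resnet50(weights=None)
n_params = sum(p.numel() for p in model.parameters())
print(f"ResNet-50 parameters: {n_params:,}")   # roughly 25.6 million

State-of-the-art language models push this count into the billions, which is why post-hoc explanation methods, rather than direct inspection of the weights, are the practical route to understanding such models.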

 

Who Needs XAI and Why

For these reasons, the definitions surrounding XAI and the methods that implement it have attracted the attention of different groups of people who interact with AI systems. As noted in [1], XAI is essential to (1) the users who operate an AI system, (2) the people affected by its decisions, and (3) the developers of the AI algorithms. An example of group 1 is a medical doctor who uses an AI system that recommends auto-generated diagnosis reports; as a domain expert, the doctor typically wants to understand the rationale behind a diagnosis before investigating it further. Group 2 consists of the end users of the AI application, such as the patient in the scenario above or a person whose loan application is rejected by an AI-based decision-making system; these users also want to understand the reasoning behind a decision, since it directly affects their lives. Lastly, the developers in group 3 want to understand the behavior of their models in order to improve robustness and detect weaknesses. Another developer use case is detecting bias in the data and learning how to eliminate it.

 

Public Initiatives

DARPA’s XAI Program

 

It is not only individuals interacting with AI systems but also governments and public institutions that are interested in explaining the black-box behavior of AI systems in order to ensure public safety. The first major public initiative on explainability in the last decade is DARPA’s XAI program, described from the program managers' and evaluators' perspective in [2]. The program ran from 2017 to 2021, and its main goal was to create machine learning techniques that “produce more explainable models” and “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.” It focused mainly on developing explainable learners, understanding human psychology to improve explainability, and evaluating XAI techniques. Through studies carried out by several research groups, with many users taking part in the evaluation procedures, the program yielded a number of notable insights about XAI (described in detail in [2]).

Governmental Regulations

 

[EU] With the rising popularity of AI systems, governments are also acting to regulate them for transparency and accountability in order to reduce potential risks, and explainability features prominently in these studies and policies [3]. The EU's regulations on XAI reflect a commitment to a legal framework that upholds the EU's values of fundamental rights, ensuring AI systems are trustworthy and safe. The EU's approach, highlighted in its 2018 AI strategy, initially emphasized innovation while safeguarding citizen rights, advocating for AI transparency, accountability, and human oversight. The General Data Protection Regulation (GDPR) introduced the notion of a "right to an explanation" for decisions made by automated systems, though its implementation sparked debate regarding its vagueness and practical applicability. In 2019, the Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (HLEG) further developed the concept of "explicability" as essential for user trust, distinguishing between technical explainability and the broader understanding of AI systems' purposes and decisions.

 

Moving forward, in February 2024 all 27 EU Member States unanimously approved the penultimate draft of the EU AI Act (AIA), and the European Parliament formally adopted the Act in March 2024. According to [4], Explainable AI (XAI) is vital for meeting the EU AI Act's compliance demands, especially for high-risk AI applications, by ensuring these systems are transparent, understandable, and accountable. The Act requires transparency about AI interactions even for "limited risk" uses, such as customer service bots. Beyond regulatory compliance, XAI also offers businesses the advantage of better understanding and refining their AI systems, thereby improving customer trust and aligning with company and regulatory expectations. This makes XAI a key factor for compliance while enhancing business performance and customer satisfaction across all risk categories of AI applications.

 

[US] The importance of XAI is also highlighted in the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” signed by the US president in October 2023, which emphasizes two main areas: transparency and explainability of AI models. Regulatory bodies are focusing on ensuring that AI models used by entities are transparent, meaning that these models' inner workings and data-processing methods should be clear and understandable. Moreover, entities must be able to explain their AI models’ decisions and processes, ensuring that the rationale behind AI actions is understandable and justifiable. This focus aims to build trust in AI technologies by making them more accountable and understandable to regulators and the public.

 

[UK] The first AI Safety Summit took place in the UK in November 2023, convening officials from 28 countries, leaders from the AI industry, and prominent researchers to discuss the development and oversight of AI technologies. The declaration published after the summit [5] underlines the need for XAI and stresses its urgency: “We welcome relevant international efforts to examine and address the potential impact of AI systems in existing fora and other relevant initiatives, and the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed.”

 

Conclusion

 

XAI makes AI's complex decisions transparent and understandable, ensuring trust, accountability, and ethical integrity. It benefits users, affected individuals, and developers alike, addressing the need for clarity in AI operations across various fields. With global initiatives and regulatory frameworks from entities like DARPA, the EU, and national governments highlighting XAI's significance, the push for explainable AI reflects a broader commitment to ethical technology use. As AI becomes increasingly integrated into everyday life, prioritizing XAI is essential for maintaining responsible innovation and safeguarding human values.

 

References

[1] Xu, Feiyu, et al. "Explainable AI: A brief survey on history, research areas, approaches and challenges." Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9–14, 2019, Proceedings, Part II 8. Springer International Publishing, 2019.

[2] Gunning, David, et al. "DARPA’s explainable AI (XAI) program: A retrospective." Authorea Preprints (2021).

[3] Nannini, Luca, Agathe Balayn, and Adam Leon Smith. "Explainability in AI policies: a critical review of communications, reports, regulations, and standards in the EU, US, and UK." Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023.

[4] https://positivethinking.tech/insights/navigating-the-eu-ai-act-how-explainable-ai-simplifies-regulatory-compliance/

[5] https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023