
Responsible and Explainable AI



- Responsible AI

Trustworthy artificial intelligence (AI) has three components: 

  • Lawfulness: Compliant with all applicable laws and regulations
  • Ethics: Adheres to ethical principles and values
  • Robustness: Robust from both a technical and a social perspective


Trustworthy AI systems have many characteristics, including:

  • Validity and reliability
  • Safety, security, and resilience
  • Accountability and transparency
  • Explainability and interpretability
  • Privacy-enhanced
  • Fair with harmful bias managed


Some core principles for the ethics of AI include: 

  • Proportionality and do no harm
  • Safety and security
  • Right to privacy and data protection
  • Multi-stakeholder and adaptive governance and collaboration
  • Responsibility and accountability
  • Transparency and explainability


A strong AI code of ethics can include: 

  • Avoiding bias
  • Ensuring privacy of users and their data
  • Mitigating environmental risks

 

- Explainable AI

Explainable AI (XAI) is a set of methods and processes that help users understand and trust the results produced by machine learning (ML) algorithms. An XAI system is designed to explain its rationale, purpose, and decision-making process in terms the average person can understand. 

XAI can help human users understand the reasoning behind ML algorithms and AI. It can also help developers debug and improve model performance, and help other stakeholders understand how a model behaves. 

XAI models are sometimes referred to as "white box" models, because users can understand the rationale behind their decisions. 

This contrasts with "black box" models in machine learning, where even the designers cannot explain why the model arrived at a specific decision. 

In the healthcare domain, researchers have identified explainability as a requirement for AI clinical decision support systems. This is because the ability to interpret system outputs facilitates shared decision-making between medical professionals and patients.
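As a minimal sketch of the "white box" idea, the hypothetical model below is a hand-set linear scoring rule whose prediction can be decomposed into per-feature contributions, so the rationale behind each decision is directly readable. The feature names and weights are illustrative assumptions, not taken from any real system:

```python
# A "white box" model: a linear scorer whose decision can be explained
# by reporting each feature's contribution to the final score.
# Feature names and weights are illustrative assumptions.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}

def predict_with_explanation(features):
    """Return (decision, explanation), where the explanation maps
    each feature to its contribution to the score."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= 0 else "deny"
    return decision, contributions

decision, why = predict_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 1.0})
# score = 0.5*4.0 - 0.8*2.0 + 0.3*1.0 = 0.7, so the decision is "approve",
# and `why` shows that debt pulled the score down by 1.6.
```

Because every contribution is visible, a user can see exactly which inputs drove the outcome; a deep neural network offers no such direct readout, which is why it is called a black box.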

 

- Goals of Explainable AI

Explainable AI pursues multiple goals, including transparency, causality, privacy, fairness, trust, usability, and reliability. 

In this regard, 

  • Transparency helps us understand how a system makes a specific decision.
  • Causality assesses the extent to which model variables are related to each other.
  • Privacy indicates whether external agents have access to the original training data.
  • Fairness assesses the degree to which a learning model avoids bias and unethical discrimination.
  • Trust is a measure of the degree of confidence in a model's performance when facing new problems.
  • Usability shows the system's ability to interact safely and effectively with users.
  • Reliability measures the consistency of a learning model's results under similar conditions.
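Some of these goals can be quantified directly. For instance, fairness is often checked with simple group metrics; the sketch below computes the demographic parity difference, i.e., the gap in positive-outcome rates between two groups. The group labels and model outcomes are made-up illustrative data:

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rates
    across the groups in `groups`. 0.0 means all groups receive
    positive outcomes at the same rate; larger values suggest bias."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Illustrative data: 1 = positive model decision, 0 = negative.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(outcomes, groups)
# Group "a" rate is 3/4, group "b" rate is 1/4, so the gap is 0.5.
```

A gap this large would flag the model for closer review; in practice, libraries such as Fairlearn provide this and related metrics.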

 

- Responsible AI vs. Explainable AI

Responsible AI is a set of practices that ensure AI systems are designed, deployed, and used ethically and legally. Responsible AI focuses on ethical principles that guide AI development and deployment, ensuring fairness, accountability, and transparency. 

Explainable AI is a set of tools and frameworks that help users understand and interpret predictions made by machine learning models. XAI provides tools to understand the “black box” of complex AI models, making their decision-making processes transparent and interpretable.  

Explainable AI is considered a building block for responsible AI, with most literature considering it as a solution for improved transparency.
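One common family of XAI tools treats the model as an opaque function and probes it from the outside. The sketch below applies a simplified permutation-style sensitivity test: it nudges one input at a time and measures how much the output of a hypothetical black-box function changes, ranking features by influence. The model and inputs are illustrative assumptions:

```python
def black_box(x):
    # Stand-in for an opaque model: callers see only inputs and outputs.
    return 3.0 * x["a"] + 0.1 * x["b"]

def sensitivity(model, inputs, delta=1.0):
    """Model-agnostic probe: nudge each feature by `delta` and record
    the absolute change in the model's output."""
    base = model(inputs)
    scores = {}
    for name in inputs:
        perturbed = dict(inputs)
        perturbed[name] += delta
        scores[name] = abs(model(perturbed) - base)
    return scores

scores = sensitivity(black_box, {"a": 1.0, "b": 1.0})
# Feature "a" moves the output far more than "b" (3.0 vs. 0.1),
# so "a" is identified as the more influential input.
```

This is the intuition behind model-agnostic techniques such as permutation importance: even without opening the black box, probing its input-output behavior yields an interpretable ranking of what the model relies on.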

 

[More to come ...]

