
Trust and Ethics in AI

[Princeton University, Blair Arch]

- Overview

Trust and ethics in artificial intelligence (AI) is a broad topic that spans many levels. Principles guiding the ethics of AI include:

  • Fairness: AI systems should treat all people fairly and should not affect similarly situated individuals or groups in different ways (see the sketch after this list)
  • Inclusiveness: AI systems should empower everyone
  • Transparency
  • Accountability
  • Privacy
  • Security
  • Reliability
  • Impartiality: AI systems should neither create nor act on bias, thereby safeguarding fairness and human dignity
  • Societal and environmental well-being
  • Technical robustness
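
Fairness, in particular, lends itself to quantitative checks. As a minimal sketch, the Python snippet below computes one common group-fairness measure, the demographic parity difference (the gap in favorable-outcome rates between two groups); the function name and data are illustrative assumptions, not part of any specific fairness library.

    # A minimal sketch of one way to quantify the fairness principle above:
    # the demographic parity difference, i.e., the gap in favorable-outcome
    # rates between two groups. The data and function name are illustrative.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in favorable-outcome rates between two groups.

        y_pred -- binary model decisions (1 = favorable outcome)
        group  -- group membership indicator (0 or 1) per individual
        """
        rate_0 = y_pred[group == 0].mean()  # favorable rate, group 0
        rate_1 = y_pred[group == 1].mean()  # favorable rate, group 1
        return abs(rate_0 - rate_1)

    # Hypothetical decisions for ten individuals across two groups.
    decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    # 0.0 would mean both groups receive favorable outcomes at the same rate.
    print(demographic_parity_difference(decisions, groups))  # ~0.2

A value near zero suggests parity on this one criterion; other fairness definitions (e.g., equalized odds) can disagree, which is why fairness audits typically report several metrics.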

 

Other principles relevant to AI ethics include:

  • Diversity 
  • Non-discrimination
  • Control over one's data
  • The ability to guide machines as they learn

 

- Ethics for Trustworthy AI

Trustworthy artificial intelligence (AI) rests on three pillars that should be met throughout the system's lifecycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. These pillars are elaborated into seven technical requirements.

However, achieving truly trustworthy AI requires a broader view that covers the trustworthiness of all processes and actors in the system's lifecycle and considers these aspects from several complementary perspectives.

This more comprehensive vision rests on four fundamental axes: global principles for the ethical use and development of AI-based systems, a philosophical perspective on AI ethics, a risk-based approach to AI regulation, and the aforementioned pillars and requirements.

The seven requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability) are each analyzed from three perspectives:

  • What the requirement is.
  • Why it is needed.
  • How it can be implemented in practice.

 

- EU and AI

Artificial intelligence (AI) can help find solutions to many of society's problems. This can only be achieved if the technology is of high quality and is developed and used in ways that earn people's trust. An EU strategic framework based on EU values will therefore give citizens the confidence to accept AI-based solutions, while encouraging businesses to develop and deploy them.

This is why the European Commission has proposed a set of actions to boost excellence in AI, along with rules to ensure that the technology is trustworthy.

The Regulation on a European Approach for Artificial Intelligence and the update of the Coordinated Plan on AI will guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries.  

 

- Explainable AI

Explainable artificial intelligence (XAI) is an emerging field of research that brings transparency to highly complex and opaque machine learning (ML) models. In recent years, various techniques have been proposed to explain and validate the predictions of models that were previously widely considered black boxes (e.g., deep neural networks).

Google Cloud's Explainable AI, for example, is a set of tools and frameworks for understanding and interpreting the predictions of machine learning models, natively integrated with a number of Google products and services. With it, you can debug and improve model performance and help others understand your models' behavior.

Surprisingly, the prediction strategies of these models sometimes prove to be flawed and inconsistent with human intuition, for example because of biases or spurious correlations in the training data.

Recent efforts in the XAI community aim to move beyond merely identifying these flawed behaviors to integrating explanations into the training process to improve model efficiency, robustness, and generalization.
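
As a concrete illustration of one widely used, model-agnostic explanation technique, the Python sketch below computes permutation feature importance with scikit-learn: each feature is shuffled on held-out data, and the resulting drop in accuracy indicates how heavily the model relies on it. The dataset and model are arbitrary choices for illustration, not a reference implementation of any framework mentioned above.

    # A minimal sketch of a model-agnostic explanation method using
    # scikit-learn's permutation importance. Dataset and model choice
    # are illustrative only.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and measure the accuracy drop;
    # larger drops mark features the model leans on more heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, score in ranked[:5]:  # five most influential features
        print(f"{name}: {score:.3f}")

Because the procedure needs only predictions, it works for any black-box model, which is what makes it a popular baseline among XAI techniques.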

 

- Responsible AI

Responsible AI is a set of principles and regulations that guide how artificial intelligence (AI) is developed, deployed, and governed. It is also known as ethical or trustworthy AI.

The goal of responsible AI is to use AI in a safe, trustworthy, and ethical way. It can help reduce issues such as AI bias and increase transparency.

Some principles of responsible AI include:

  • Fairness: AI systems should be built to avoid bias and discrimination.
  • Transparency: AI systems should be understandable and explainable to both the people who make them and the people who are affected by them.
  • Accountability: This means being held responsible for the effects of an AI system. It involves transparency, or sharing information about system behavior and organizational processes. It also involves the ability to monitor, audit, and correct the system if it deviates from its intended purpose or causes harm (see the sketch after this list).
  • Empathy: AI systems should be designed with an understanding of their impact on everyone they touch, from the people who build them to the people affected by their decisions.
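
The monitoring and auditing side of accountability can be made concrete with decision logging. Below is a minimal, hypothetical Python sketch that appends one structured record per model decision; the record schema, file path, and version tag are assumptions for illustration rather than an established standard.

    # A minimal sketch of audit logging for accountability: every model
    # decision is appended to a log with its inputs and a timestamp so the
    # system's behavior can be monitored, audited, and corrected later.
    # The record schema and log path are hypothetical.
    import json
    import time
    from pathlib import Path

    AUDIT_LOG = Path("model_audit.jsonl")  # hypothetical log location

    def predict_with_audit(model, features):
        """Make a prediction and append an audit record for it."""
        prediction = model(features)
        record = {
            "timestamp": time.time(),
            "inputs": features,
            "prediction": prediction,
            "model_version": "v1.0",  # illustrative version tag
        }
        # One JSON object per line keeps the log easy to scan and aggregate.
        with AUDIT_LOG.open("a") as log:
            log.write(json.dumps(record) + "\n")
        return prediction

    # Illustrative stand-in for a real model.
    toy_model = lambda feats: int(feats["score"] > 0.5)
    print(predict_with_audit(toy_model, {"score": 0.7}))  # prints 1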


Other principles of responsible AI include:

  • Privacy and Security
  • Inclusive Collaboration

 

[More to come ...]

