
Trust and Ethics in AI

[Princeton University, Blair Arch]

- Overview

Trust and ethics in artificial intelligence (AI) is a broad topic that spans many levels. Some principles guiding the ethics of AI include:

  • Fairness: AI systems should treat all people fairly and should not affect similarly situated people or groups in different ways (see the sketch after these lists)
  • Inclusiveness: AI systems should empower everyone
  • Transparency
  • Accountability
  • Privacy
  • Security
  • Reliability
  • Impartiality: AI systems should neither create nor act on bias, thereby safeguarding fairness and human dignity
  • Societal and environmental well-being
  • Technical robustness

Other principles relevant to AI ethics include:

  • Diversity 
  • Non-discrimination
  • Control over one's data
  • The ability to guide machines as they learn
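
To make the fairness principle above concrete, the short Python sketch below checks one common quantitative notion of fairness, demographic parity: whether similarly situated groups receive positive decisions at similar rates. The group names, example outcomes, and the 0.1 tolerance are illustrative assumptions, not values drawn from any standard.

  def approval_rate(decisions):
      # Fraction of positive (1) decisions in a list of 0/1 outcomes.
      return sum(decisions) / len(decisions)

  def demographic_parity_gap(decisions_by_group):
      # Largest difference in approval rate between any two groups.
      rates = [approval_rate(d) for d in decisions_by_group.values()]
      return max(rates) - min(rates)

  # Hypothetical model decisions for two similarly situated groups.
  outcomes = {
      "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved
      "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
  }

  gap = demographic_parity_gap(outcomes)
  print(f"Demographic parity gap: {gap:.2f}")
  if gap > 0.1:  # illustrative tolerance, not a regulatory threshold
      print("Warning: similarly situated groups receive different outcomes.")

Demographic parity is only one of several competing fairness metrics (alongside, for example, equalized odds); which metric is appropriate depends on the context in which the system is used.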

 

- Ethics for Trustworthy AI

Trustworthy artificial intelligence (AI) is based on seven technical requirements built on three pillars, all of which should be met throughout the system's life cycle: the system should be (1) lawful, (2) ethical, and (3) robust, both from a technical and from a social perspective.

However, achieving truly trustworthy AI involves a broader vision that encompasses the trustworthiness of all processes and actors that are part of the system's life cycle, and that considers the aspects above from additional perspectives.

The more comprehensive vision considers four fundamental axes: global principles for the ethical use and development of AI-based systems, a philosophical view of AI ethics, a risk-based approach to AI regulation, and the pillars and requirements mentioned above.

The seven requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability) are each analyzed from a triple perspective: what the requirement is, why it is needed, and how it can be implemented in practice.

On the other hand, a pragmatic approach to implementing trustworthy AI systems allows a legally oriented concept of liability for AI-based systems to be defined through a given audit process. Responsible AI systems are therefore the final concept introduced in this work: a necessary concept that can be achieved through the audit process, though such audits face the challenges that come with the use of regulatory sandboxes.

This multidisciplinary vision of trustworthy AI ultimately opens a debate about the different views recently published on the future of AI. The reflection concludes that regulation is key to reaching consensus among these views, and that trustworthy and responsible AI systems are critical to the present and future of our society.

 

- EU and AI

Artificial intelligence (AI) can help find solutions to many of society's problems. This can only be achieved if the technology is of high quality and is developed and used in ways that earn people's trust. An EU strategic framework based on EU values will therefore give citizens the confidence to accept AI-based solutions, while encouraging businesses to develop and deploy them.

This is why the European Commission has proposed a set of actions to boost excellence in AI, together with rules to ensure that the technology is trustworthy.

The Regulation on a European Approach for Artificial Intelligence and the update of the Coordinated Plan on AI will guarantee the safety and fundamental rights of people and businesses, while strengthening investment and innovation across EU countries.  

 

[More to come ...]

