
AI Ethics

[Image: University of Michigan Law School]

 

- Overview

Artificial Intelligence (AI) Ethics is a multidisciplinary field that studies how to optimize the beneficial effects of artificial intelligence while reducing risks and adverse outcomes.

AI ethics is a set of guidelines that advise on the design and outcomes of artificial intelligence. These guidelines aim to ensure that AI technologies are developed and used responsibly, which means taking a safe, reliable, humane, and environmentally friendly approach to artificial intelligence.

Core concerns of AI ethics include safety, security, human concerns, environmental considerations, individual rights, privacy, non-discrimination, and non-manipulation.

Examples of ethical AI include:

  • Botanists use AI to create smart greenhouses.
  • Machine learning researchers explore the possibility of using AI to detect abnormalities on rail tracks.

Examples of unethical AI include: 

  • Amazon’s gender-biased hiring algorithm.
  • Facial recognition technology that is less accurate for people with darker skin tones.

As AI becomes an integral part of products and services, organizations are beginning to develop AI ethics guidelines.

 

- Major AI Ethics Challenges

There are many ethical challenges:

  • Lack of transparency in AI tools: AI decisions are not always understandable to humans.
  • AI is not neutral: AI-based decisions are prone to inaccuracies, discriminatory outcomes, and embedded or inserted biases.
  • Surveillance practices used for data collection, and the privacy of the people whose data is collected.
Here are some of the big ethical issues with AI:
  • Privacy: Artificial intelligence needs data to train, but where does the data come from and how is it used?
  • Bias and discrimination: AI systems may perpetuate or amplify social biases. To minimize discrimination, it is important to train AI systems on diverse and inclusive data (a minimal representation check is sketched after these lists).
  • Transparency: AI decisions are not always understandable to humans. AI developers have an ethical obligation to be transparent in a structured, accessible way.
Other ethical issues with AI include:
  • Concern that AI may displace human jobs
  • Use of AI to deceive or manipulate
  • Monitoring practices for data collection
  • Risk assessment and oversight
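
To make the point about diverse and inclusive training data concrete, here is a minimal sketch of a representation check on a training set. It is pure Python; the group labels, the counts, and the 10% flagging threshold are all made-up assumptions for illustration, not standards.

    # Count how each (hypothetical) demographic group is represented
    # in a training set and flag groups below an assumed threshold.
    from collections import Counter

    # Made-up group labels attached to training examples.
    train_groups = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

    counts = Counter(train_groups)
    total = sum(counts.values())

    for group, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- underrepresented?" if share < 0.10 else ""
        print(f"{group}: {count} examples ({share:.1%}){flag}")

A check like this cannot prove a dataset is fair, but it surfaces obvious gaps in coverage before a model is trained.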

 

- Trust in AI

Trust in artificial intelligence (AI) is a relationship between two parties in which the trusting party believes that the trusted party will keep its promises. Trust in AI can be built by understanding how AI systems make decisions and recommendations. This understanding gives people a sense of predictability and control.

However, AI is not perfect and can be unreliable for several reasons, including data reliability, data size, security, privacy, and hardware limitations. For example, a self-driving car might mistake a white tractor-trailer crossing a highway for the sky. To be trustworthy, AI needs to recognize such mistakes before it is too late.

According to a Gallup survey, 79% of Americans have little or no trust that businesses will use AI responsibly. Only 21% of respondents said they trusted businesses with AI "a lot" or "some".

 

- AI Code of Ethics

As AI proliferates, and governments try to keep up with the fast-moving technology both structurally and legally, AI ethics has become a key topic that everyone should be aware of. So, what is AI ethics and why is it important?

AI ethics is a set of moral principles and technical systems designed to inform the development and responsible use of AI technologies. As AI has become an integral part of products and services, organizations are starting to develop codes of ethics for AI.

The AI Code of Ethics, also known as the AI Value Platform, is a policy statement that formally defines the role of artificial intelligence in the continued development of humanity. The purpose of the AI Code of Ethics is to provide guidance to stakeholders as they face ethical decisions about the use of AI.

 

- Areas of AI Ethics

In general, AI ethics is an umbrella term for a set of considerations for responsible AI that incorporates safety, security, human concerns, and environmental considerations. Some areas of AI ethics include:

  • Avoid AI bias. Because AI learns from data, a poorly constructed AI can (and does) exhibit bias against poorly represented subsets of the data. In particular, poorly trained AI may exhibit bias against minority and underrepresented groups. Well-known cases of bias, such as in recruiting tools and chatbots, have embarrassed prominent corporate brands and created legal risks.
  • AI and privacy. AI relies on information to learn, and a large portion of that information comes from users. Not all users are aware of what information is being collected about them or how it is used to make decisions that affect them. Even today, everything from internet searches to online purchases to social media activity can be used to track, identify, and personalize the user experience. While this can be positive (e.g., AI recommending products a user might want), it can also lead to unintended bias (e.g., certain offers going to some consumers but not others).
  • Avoid AI mistakes. A poorly constructed AI can make mistakes that lead to anything from loss of income to death. Sufficient testing is needed to ensure that AI does not pose a risk to people or their environment.
  • Manage the impact of AI on the environment. AI models are getting bigger every day, with the largest recent models having more than a trillion parameters. These large models consume a great deal of energy to train, making AI a resource hog. Researchers are developing techniques that balance performance and energy efficiency; a rough, illustrative energy estimate is sketched below.
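
As a rough illustration of the energy concern above, the following back-of-envelope sketch estimates the energy and emissions of a hypothetical training run. It is a minimal sketch in Python; every figure is an illustrative assumption, not a measurement of any real model.

    # Back-of-envelope training-energy estimate.
    # All figures are illustrative assumptions, not measurements.
    num_gpus = 1024            # accelerators in the training cluster (assumed)
    gpu_power_kw = 0.4         # average draw per accelerator, in kW (assumed)
    training_hours = 30 * 24   # a hypothetical 30-day training run
    pue = 1.2                  # data-center power usage effectiveness (assumed)
    grid_kg_co2_per_kwh = 0.4  # grid carbon intensity (assumed)

    energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
    co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

    print(f"Estimated energy: {energy_kwh:,.0f} kWh")
    print(f"Estimated emissions: {co2_tonnes:,.0f} tonnes of CO2")

Even under these assumptions, a single run lands in the hundreds of megawatt-hours, which is why energy-efficient training techniques matter.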

 

AI Ethics: Take Control of AI Systems

Legal and ethical issues that AI raises for society include privacy and surveillance and bias or discrimination, while the underlying philosophical challenge is the role of human judgment. As newer digital technologies are adopted, concerns have arisen that they will become a new source of inaccuracies and data breaches.

The convergence of massive amounts of big data, the speed and reach of cloud computing platforms, and advances in sophisticated machine learning algorithms has spawned a range of innovations in artificial intelligence (AI). The benefits of AI systems to society are enormous, and so are the challenges and concerns.

AI systems may be able to get things done quickly, but that doesn't mean they always get things done fairly. If the dataset used to train the machine learning model contains biased data, the system may exhibit the same bias when making decisions in practice. For example, if a dataset contains mostly images of white men, a facial recognition model trained on that data may be less accurate for women or people of different skin tones.
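
To make that accuracy gap concrete, here is a minimal sketch of a per-group accuracy audit. It is pure Python; the predictions, labels, and group names are made up for illustration, and a real audit would use a proper evaluation set and statistical tests.

    # Compare classifier accuracy across (made-up) demographic groups.
    records = [
        # (group, true_label, predicted_label)
        ("lighter_skin", 1, 1), ("lighter_skin", 0, 0), ("lighter_skin", 1, 1),
        ("lighter_skin", 0, 0), ("lighter_skin", 1, 1), ("lighter_skin", 0, 1),
        ("darker_skin", 1, 0), ("darker_skin", 0, 0), ("darker_skin", 1, 1),
        ("darker_skin", 0, 1), ("darker_skin", 1, 0), ("darker_skin", 0, 0),
    ]

    by_group = {}
    for group, truth, pred in records:
        correct, total = by_group.get(group, (0, 0))
        by_group[group] = (correct + (truth == pred), total + 1)

    for group, (correct, total) in by_group.items():
        print(f"{group}: accuracy {correct / total:.0%} ({correct}/{total})")

A gap like the one this toy data produces (83% versus 67%) is exactly the kind of signal such an audit is meant to surface.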

The success of any AI application is intrinsically tied to its training data. Not only do you need the right data quality and the right amount of data; you must also proactively ensure that your AI engineers do not pass their own potential biases on to their creations. If engineers allow their own worldviews and assumptions to influence datasets (perhaps by providing data limited to certain demographics or focus areas), applications that rely on that AI will be just as biased and inaccurate, and will simply not work as well.

 

- Identify AI Bias

AI bias comes in many forms. Cognitive biases originating with human developers can affect machine learning models and training datasets; in effect, the bias becomes hard-coded into the algorithm. Incomplete data can itself be biased, especially if information is missing because of those same cognitive biases.

When AI trained and developed without bias is put into use, its results can still be affected by deployment bias. Aggregation bias is another risk that occurs when small choices made throughout an AI project have a large collective impact on the integrity of the results. In short, any AI recipe has many inherent steps in which biases can arise.
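
One simple screening check for biased results, whichever step introduced the bias, is the disparate-impact ratio behind the "four-fifths rule" used in US employment contexts: each group's selection rate is compared to the highest group's rate, and ratios below 0.80 are flagged. Below is a minimal sketch in Python; the selection counts are made up for illustration.

    # Disparate-impact ("four-fifths rule") check on made-up outcomes.
    outcomes = {
        # group: (selected, applicants)
        "group_a": (60, 100),
        "group_b": (36, 100),
    }

    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    reference = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / reference
        flag = "  <-- below the 0.80 threshold" if ratio < 0.80 else ""
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")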

 

- AI Ethics: Maximize AI, Minimize Risk

The learning curve of an ever-evolving technology brings miscalculations and mistakes that can lead to unintended harmful effects. AI ethics comprises a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide ethical behavior in the development and deployment of AI technologies.

The stakes are high: problems in AI systems that can cause harm must be identified and addressed quickly. Identifying the potential risks posed by AI systems therefore means planning countermeasures that can be implemented as soon as possible.

Public sector organizations can develop and implement ethical, fair, and safe AI systems by creating a culture of responsible innovation that anticipates and prevents potential hazards. Everyone involved in the design, production, and deployment of AI projects, including data scientists, data engineers, domain experts, delivery managers, and department heads, should prioritize AI ethics and safety.

 

[Image: Grindelwald, Switzerland]

Legal and Ethical Considerations in AI

AI ethics is the branch of technology ethics specific to AI systems. It is sometimes divided into concerns about the ethical behavior of humans as they design, manufacture, use, and operate AI systems, and concerns about the behavior of the machines themselves (machine ethics). It also includes possible singularity problems arising from superintelligent AI.

As AI takes on a greater decision-making role in more industries, ethical concerns also increase. AI raises three main areas of social ethical concern: privacy and surveillance, bias and discrimination, and perhaps the most profound and difficult philosophical question of our time, the role of human judgment.

 

- Area 1: Robot Ethics

Robot ethics is a growing interdisciplinary research effort, roughly at the intersection of applied ethics and robotics, aimed at understanding the ethical implications and consequences of robotics, especially autonomous robots. Researchers, theorists, and scholars from diverse fields including robotics, computer science, psychology, law, philosophy, and more are working on pressing ethical questions about the development and deployment of robotics in society. 

Many areas of robotics are affected, especially where robots interact with humans: from elder-care and medical robots, to robots used in a variety of search and rescue missions (including military robots), to various service and entertainment robots.

While military robots were initially the main focus of discussion (e.g., whether and when autonomous robots should be allowed to use lethal force, and whether they should be allowed to make these decisions autonomously), in recent years other influential types of robots, especially social robots, have become an increasingly important topic.

 

- Area 2: Tackling the Biases in AI Systems

Over the past few years, society has begun to ponder the extent to which human biases can find their way into AI systems, with detrimental consequences. At a time when many companies are looking to deploy AI systems in their operations, it is imperative to be acutely aware of these risks and to work to mitigate them. What can CEOs and their top management teams do to lead on bias and fairness? We see six essential steps:

  • First, business leaders need to keep up with the latest developments in this rapidly evolving field of research.
  • Second, establish responsible processes that mitigate bias wherever your business or organization deploys AI. Consider using a combination of technical tools and operational practices, such as internal "red teaming" or third-party audits.
  • Third, have fact-based conversations about potential human biases. This can take the form of running algorithms alongside human decision makers, comparing the results, and using "interpretability techniques" to help determine what caused the model to make a decision and why outcomes might differ (a minimal sketch appears after this list).
  • Fourth, consider how humans and machines can work together to mitigate bias, including human-in-the-loop processes.
  • Fifth, invest more, provide more data, and take a multidisciplinary approach to bias research (while respecting privacy) to continue advancing the field.
  • Sixth, invest more in diversifying the AI field itself. A more diverse AI community will be better able to anticipate, scrutinize, and detect bias, and to engage affected communities.
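
As one example of the "interpretability techniques" mentioned in the third step, the sketch below uses permutation feature importance to see which inputs drive a model's decisions. It assumes scikit-learn is available; the dataset, the feature names, and the deliberately biased "proxy" feature are all invented for illustration.

    # Permutation importance: measure how much model accuracy drops when
    # each feature is shuffled. A large drop for the proxy feature would
    # flag it as a likely bias source. (Data is synthetic and illustrative.)
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    n = 1000
    experience = rng.normal(5, 2, n)    # years of experience (made up)
    test_score = rng.normal(70, 10, n)  # skills-test score (made up)
    proxy = rng.integers(0, 2, n)       # stand-in for a protected attribute

    # Labels deliberately depend on the proxy to simulate a biased process.
    y = (0.3 * experience + 0.05 * test_score + 1.5 * proxy
         + rng.normal(0, 1, n) > 6.0).astype(int)
    X = np.column_stack([experience, test_score, proxy])

    model = LogisticRegression(max_iter=1000).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

    for name, imp in zip(["experience", "test_score", "proxy"],
                         result.importances_mean):
        print(f"{name}: mean accuracy drop {imp:.3f}")

Seeing a large accuracy drop for a feature like the proxy is the cue to investigate why the model relies on it and whether that reliance is defensible.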

 

- Area 3: Algorithmic Biases

Algorithmic bias exposes the vulnerability of supposedly "perfect" AI systems. Algorithmic bias is the lack of fairness that arises from the behavior of a computer system. This lack of fairness appears in different ways, but it can be understood as a set of biases differentiated by specific categories.

Human bias is an issue that has been well studied in psychology for many years. It stems from implicit associations, reflecting biases we are unaware of and the ways they affect the outcomes of events. Over the past few years, society has begun grappling with the extent to which these human biases can find their way into artificial intelligence systems, with damaging consequences.

As many companies look to deploy AI solutions, it is imperative to be deeply aware of these threats and to seek to minimize them. Algorithmic bias in AI systems can take many forms, including gender bias, racial bias, and age discrimination.

The data used for training plays a crucial role in introducing bias. For example, in 2016, Microsoft released an AI-based conversational chatbot on Twitter that was supposed to engage with people through tweets and direct messages. However, it began replying with highly offensive and racist messages within hours of its release.

The chatbot was trained on anonymized public data and had built-in internal learning, which enabled a coordinated attack by a group of people to introduce racist bias into the system. Some users were able to inundate the bot with misogynistic, racist, and anti-Semitic language. The event was an eye-opener for a wider audience about the potential negative effects of unfair algorithmic bias in AI systems.

 

 

[More to come ...]


