Ethics of AI and Robotics

UC_Berkeley_101020A
[University of California at Berkeley]

 

- Overview

Machines mostly learn through a process of repeated trial and error. Give a machine lots of data, ask it to answer a question, and then tell it whether its answer was correct. After repeating this process millions of times (and getting things wrong much of the time), the machine gradually gets better at giving correct answers, finding patterns in the data, and working out how to solve increasingly complex problems. 
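The trial-and-error loop described above can be sketched with a tiny perceptron: the machine guesses an answer, is told whether it was right, and nudges its internal weights after each mistake. The task (learning logical AND) and all names here are illustrative, not from the original text.

```python
import random

def train_perceptron(examples, epochs=50, lr=0.1, seed=0):
    """Learn weights for a simple yes/no task by repeated correction."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(3)]  # two inputs + bias
    for _ in range(epochs):
        for x1, x2, target in examples:
            # The machine gives an answer...
            guess = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            # ...is told whether it was right, and adjusts its weights
            # in proportion to the error.
            error = target - guess
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            w[2] += lr * error
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Learn logical AND purely from labeled examples.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(data)
print([predict(w, x1, x2) for x1, x2, _ in data])  # expect [0, 0, 0, 1]
```

After enough corrections the guesses match every example, which is the "gradually gets better" behavior the text describes, scaled down to four data points.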

Machines can only learn from the data they’re presented with - so what happens if there are problems with that data? We’ve already seen artificial intelligence (AI) systems that favor male over female candidates applying for technical jobs, or that make a range of errors about people with dark skin tones, because of the biased data they were trained on.

We should think very carefully about what it means to be fair and what it means to make a good decision. What does it mean, mathematically, for a computer program to be fair - not racist, ageist, or sexist? 

These are challenging research questions that we're now facing as we hand these decisions to machines. Those decisions can have a major impact on someone’s life: who gets a loan, who gets welfare, how much we pay for insurance, who goes to jail. 
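One common way to make "fair" mathematically precise is demographic parity: the rate of favorable decisions should be roughly equal across groups. A minimal sketch, using hypothetical loan-approval data (the groups, decisions, and function names below are illustrative assumptions, not from the original text):

```python
def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rates between any two groups.

    A gap of 0 means every group receives favorable decisions at the
    same rate; a large gap is one signal of potential bias.
    """
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2/8 approved
}
gap = demographic_parity_gap(decisions)
print(round(gap, 3))  # 0.625 - 0.25 = 0.375
```

Demographic parity is only one of several competing formal definitions of fairness (others compare error rates or calibration across groups), and they cannot all be satisfied at once - which is part of why these remain open research questions.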

We need to be careful about handing those decisions over to machines because those machines may capture the biases of society. That means there’s still plenty of work to do. 

We need to get better at knowing how to teach machines before giving them too much responsibility. Once we do, the benefits to society will be immense—and we can already see real-world examples of how AI is improving our lives.

 

- AI Ethics

AI ethics is a set of moral principles and techniques that help guide the development and responsible use of artificial intelligence. AI doesn't have morals or ethics because it doesn't have consciousness, emotions, or intentions. However, AI systems can be trained and designed to follow ethical principles and exhibit moral behavior. 

Some ethical considerations for AI include: 

  • Bias and discrimination
  • Transparency and accountability
  • Creativity and ownership
  • Social manipulation and misinformation
  • Privacy, security, and surveillance
  • Job displacement
  • Autonomous weapons

 

Other ethical standards for AI and robotics include:

  • Do no harm
  • Be free of bias and deception
  • Respect human rights and freedoms, including dignity and privacy
  • Promote well-being
  • Be transparent and dependable
  • Remain accountable to their human designers

 

Some ethical principles for AI include: 

  • Transparency
  • Justice and fairness
  • Non-maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom and autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

 

- Robot Ethics (Roboethics)

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use, and treat robots. It considers both how artificially intelligent systems may be used to harm humans and how they may be used to benefit them. 

AI and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control those risks. 

As we continue to develop machines with decision-making abilities comparable to those of a human mind, recognising and addressing these questions is more important than ever. 

Technology like AI will change society. It’s already becoming part of our lives. But society also gets to change technology. We need to work out how to make sure it improves the quality of everyone’s life.

  

 

[More to come ...]
