Trust in AI

- Overview

Trustworthy AI depends on accountability, and a prerequisite for accountability is transparency. Transparency refers to the extent to which individuals interacting with an AI system have access to information about the AI system and its outputs.

Trust in AI can be said to exist when humans hold certain expectations about the behavior of an artificial intelligence (AI) system, regardless of the intentions or ethics of the artificial agent.

Just like humans, AI systems make mistakes. For example, a self-driving car might mistake a white tractor-trailer crossing a highway for the sky. But to be trustworthy, AI needs to be able to identify these errors before it's too late.
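
A simple way an AI system can catch such errors before it is too late is to measure its own confidence and defer to a human when that confidence is low. The Python sketch below illustrates the idea with a toy classifier wrapper; the threshold, labels, and scores are hypothetical and not taken from any particular system.

    import numpy as np

    def softmax(logits):
        """Convert raw model scores into probabilities."""
        exp = np.exp(logits - np.max(logits))
        return exp / exp.sum()

    def predict_or_defer(logits, labels, threshold=0.9):
        """Return a label only when the model is confident enough;
        otherwise defer the decision to a human reviewer."""
        probs = softmax(np.asarray(logits, dtype=float))
        top = int(np.argmax(probs))
        if probs[top] >= threshold:
            return labels[top], float(probs[top])
        return "DEFER_TO_HUMAN", float(probs[top])

    # Hypothetical perception output for an ambiguous scene
    # (e.g. "sky" vs. a white tractor-trailer).
    labels = ["sky", "white_trailer"]
    print(predict_or_defer([2.1, 1.9], labels))  # low margin -> defer
    print(predict_or_defer([6.0, 0.5], labels))  # confident -> predict "sky"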

While generative AI models can synthesize new images, videos, text, and language, humans are still better at imagining "what if" scenarios. Through what we call self-supervised learning, however, AI systems are getting smarter and are learning to make these connections.
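
A minimal sketch of the self-supervised idea mentioned above: the training signal is created from the raw data itself by hiding part of the input and asking the model to predict it, with no human labels involved. The toy corpus and function names here are purely illustrative.

    # Self-supervised learning in miniature: labels come from the data
    # itself (a masked word), not from human annotation.
    corpus = [
        "the car stops at the red light",
        "the truck crosses the highway",
    ]

    def make_masked_examples(sentence):
        """Turn one sentence into several (input, target) training pairs
        by masking each word in turn."""
        words = sentence.split()
        examples = []
        for i, target in enumerate(words):
            masked = words[:i] + ["[MASK]"] + words[i + 1:]
            examples.append((" ".join(masked), target))
        return examples

    for sentence in corpus:
        for model_input, target in make_masked_examples(sentence):
            print(f"{model_input!r} -> {target!r}")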

  

- Building Trust in AI

Only 20% of U.S. consumers "mostly" or "completely" trust AI, according to a Dunnhumby study. A CNBC survey found that only three in 10 adults said they would be interested in using AI tools to help manage their money.

A global study by KPMG shows that in 2023, 34% of UK respondents said they were somewhat, mostly or completely willing to trust the use of artificial intelligence (AI). 

As AI's role in our lives continues to expand, important questions arise about the degree to which we can, and should, trust it.

Building trust between humans and AI will become increasingly complex, but there are key ways to improve AI governance and sustainability to ensure the technology remains trustworthy. Trustworthy AI can bring many benefits, such as better healthcare, safer and cleaner transportation, more efficient manufacturing, and cheaper and more sustainable energy.

Artificial intelligence (AI) can help find solutions to many social problems, but only if the technology is of high quality and is developed and used in ways that earn people's trust.

 

- Trustworthy Enterprise AI Systems

In recent years, many successful applications of AI have emerged, mainly because of the fusion of improved algorithms, enormous computing power, and massive amounts of data. Machine learning approaches built on these ingredients give AI systems near human-level perception capabilities such as speech-to-text, text understanding, image interpretation, and more.

These capabilities make it possible to deploy AI systems in real-life scenarios that are often highly uncertain. Still, the current consumer-facing AI applications that deliver services to users, from navigation systems to voice-activated "smart" homes, barely scratch the surface of the enormous opportunities AI presents for businesses and other institutions.

The main purpose of so-called enterprise AI is to empower humans to make smarter, more informed decisions. AI and humans have highly complementary capabilities, and it is only by combining them that we achieve the best results.

Typical applications in enterprise AI are decision support systems for physicians, educators, financial services operators, and many other professionals who need to make complex decisions based on large amounts of data.
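
A common pattern in such decision-support systems is for the model to propose ranked recommendations with supporting evidence while the professional makes the final call. The sketch below is a generic, hypothetical illustration of that division of labour, not a description of any specific product.

    from dataclasses import dataclass, field

    @dataclass
    class Recommendation:
        option: str     # the action the model suggests
        score: float    # model confidence in [0, 1]
        evidence: list = field(default_factory=list)  # supporting data points

    def rank_recommendations(candidates):
        """Rank candidate actions by model score; the physician (or other
        professional) reviews the evidence and makes the final decision."""
        return sorted(candidates, key=lambda r: r.score, reverse=True)

    candidates = [
        Recommendation("order additional imaging", 0.82, ["elevated marker A"]),
        Recommendation("discharge with follow-up", 0.35, ["stable vital signs"]),
    ]

    for rec in rank_recommendations(candidates):
        print(f"{rec.score:.2f}  {rec.option}  (evidence: {', '.join(rec.evidence)})")
    # The system only ranks and explains; the human professional decides.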

 

- A Problem of Trust

It is easy to see that AI will be ubiquitous in our daily lives. This is sure to bring many benefits in terms of scientific progress, human well-being, economic value, and exploring possibilities to solve major social and environmental problems. 

However, such a powerful technology also raises concerns: whether it makes important decisions in ways that humans consider fair, whether it is aware of and consistent with the human values relevant to the problem being solved, and whether it can explain its reasoning and decisions.

Since many successful AI technologies rely on large amounts of data, it is important to understand how AI systems, and those who build them, process data.

These concerns are among the hurdles holding back AI development, or worrying current AI users, adopters, and policymakers. 

Issues include the black-box nature of some AI approaches, the discriminatory decisions that AI algorithms may make, and accountability and responsibility when AI systems are involved in poor outcomes. 

Without answers to these questions, many people will not trust AI and thus neither fully adopt it nor take advantage of its positive capabilities. 

According to a new study by the IBM Institute for Business Value, 82 percent of businesses and 93 percent of high-performing businesses are now considering or continuing to adopt artificial intelligence because of the technology's ability to increase revenue, improve customer service, reduce costs and manage risk. 

However, according to the same study, despite their awareness of the technology's vast benefits, 60 percent of these companies are concerned about liability and 63 percent say they lack the skills to harness the potential of AI.


- Building Trust through the Legal Framework on AI

The EU is proposing new rules to make sure that AI systems used in the EU are safe, transparent, ethical, unbiased, and under human control. To that end, AI systems are categorised by risk:

 

Category 1: Unacceptable

Anything considered a clear threat to EU citizens will be banned: from social scoring by governments to toys using voice assistance that encourage dangerous behaviour in children.

 

Category 2: High Risk

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams)
  • Safety components of products (e.g. AI application in robot-assisted surgery)
  • Employment, workers management and access to self-employment (e.g. CV sorting software for recruitment procedures)
  • Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan)
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents)
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)

They will all be carefully assessed before being put on the market and throughout their lifecycle.

 

Category 3: Limited Risk

AI systems such as chatbots are subject to specific transparency obligations, intended to let people know they are interacting with a machine so they can make informed decisions. The user can then decide to continue or step back from using the application.

 

Category 4: Minimal Risk

Free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, where the new rules do not intervene, as these systems represent only minimal or no risk to citizens' rights or safety.
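
To make the four-tier structure concrete, the sketch below encodes it as a simple lookup an organisation might use when triaging a proposed AI use case. The category assignments mirror the examples above, but the mapping and code are purely illustrative, not an official classifier.

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "banned"
        HIGH = "assessed before market entry and throughout the lifecycle"
        LIMITED = "transparency obligations (e.g. disclose that a chatbot is AI)"
        MINIMAL = "no additional obligations"

    # Illustrative mapping of the examples listed above to the four tiers.
    USE_CASE_CATEGORY = {
        "social scoring by a government": RiskCategory.UNACCEPTABLE,
        "CV-sorting software for recruitment": RiskCategory.HIGH,
        "credit scoring for loan decisions": RiskCategory.HIGH,
        "customer-service chatbot": RiskCategory.LIMITED,
        "spam filter": RiskCategory.MINIMAL,
    }

    def triage(use_case):
        """Look up the risk tier for a proposed AI use case."""
        category = USE_CASE_CATEGORY.get(use_case)
        if category is None:
            return "unknown - needs legal review"
        return f"{category.name}: {category.value}"

    print(triage("customer-service chatbot"))
    print(triage("spam filter"))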

 

[More to come ...]


