AI Environments

[Figure: An AI Agent Interacting with an Environment]



- Overview

An AI environment is the external context in which an AI agent operates. It supplies the stimuli the agent perceives and the feedback the agent receives as it interacts with the world.
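As a rough illustration of this perceive-and-act cycle, the Python sketch below wires a toy environment to a simple agent. The GridWorld and ReflexAgent classes are hypothetical stand-ins invented for this example, not part of any particular library.

```python
class GridWorld:
    """A tiny toy environment: the agent walks a 1-D corridor toward a goal."""
    def __init__(self, size=5):
        self.size = size
        self.position = 0

    def perceive(self):
        # The percept (stimulus) the environment provides to the agent.
        return {"position": self.position, "goal": self.size - 1}

    def act(self, action):
        # Apply the agent's action and return feedback (reward, done flag).
        if action == "right":
            self.position = min(self.position + 1, self.size - 1)
        elif action == "left":
            self.position = max(self.position - 1, 0)
        done = self.position == self.size - 1
        reward = 1.0 if done else -0.1
        return reward, done


class ReflexAgent:
    """A simple agent that always moves toward the goal it perceives."""
    def choose_action(self, percept):
        return "right" if percept["position"] < percept["goal"] else "left"


env, agent = GridWorld(), ReflexAgent()
done = False
while not done:
    percept = env.perceive()          # environment -> agent (stimulus)
    action = agent.choose_action(percept)
    reward, done = env.act(action)    # agent -> environment (feedback)
    print(percept, action, reward)
```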

There are three basic types of environments in AI: physical, virtual, and simulated. Each category serves a particular function and raises its own issues for AI system development.

Beyond these basic types, AI environments can also be classified along several dimensions:

  • Discrete vs. continuous: If the environment has a limited number of distinct, clearly defined states, it is discrete (for example, chess); otherwise it is continuous.
  • Episodic vs. non-episodic: In an episodic environment, the agent's current action does not affect future actions; in a non-episodic (sequential) environment, it does.
  • Competitive vs. collaborative: In a competitive environment, each agent's success is directly tied to the failure of others, and agents must compete to achieve their objectives (for example, games like chess). In a collaborative environment, multiple agents work together toward a common goal.

Additional distinctions, covered in more detail in the next section and illustrated in the sketch below, include:

  • Fully observable vs. partially observable
  • Deterministic vs. stochastic

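One way to make these distinctions concrete is to record them as properties of an environment. The Python sketch below uses a hypothetical EnvironmentProfile dataclass to compare chess with a self-driving-car scenario along the dimensions just listed; the field names and the particular classifications are illustrative, not a standard taxonomy.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    # Illustrative properties matching the distinctions discussed above.
    discrete: bool          # finite, clearly defined states vs. continuous
    episodic: bool          # current action does not affect future episodes
    competitive: bool       # agents succeed at each other's expense
    fully_observable: bool  # sensors see the complete state
    deterministic: bool     # next state fully determined by state + action

chess = EnvironmentProfile(
    discrete=True, episodic=False, competitive=True,
    fully_observable=True, deterministic=True,
)

self_driving_car = EnvironmentProfile(
    discrete=False, episodic=False, competitive=False,
    fully_observable=False, deterministic=False,
)

print(chess)
print(self_driving_car)
```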
 

- Types of AI Environments

There are several aspects that distinguish AI environments. The shape and frequency of the data, the nature of the problem, and the volume of knowledge available at any given time are some of the elements that differentiate one type of AI environment from another.

Understanding the characteristics of the environment is one of the first tasks AI practitioners focus on when tackling a specific AI problem.

From that perspective, AI problems can be grouped into several categories based on the nature of the environment.

  • Complete vs. incomplete: Complete AI environments are those in which, at any given time, we have enough information to compute a complete branch of the problem. Chess is a classic example of a complete AI environment. Poker, on the other hand, is an incomplete environment: AI strategies cannot anticipate many moves in advance and instead focus on finding a good "equilibrium" at any given time.
  • Fully observable vs. partially observable: In a fully observable environment, an agent's sensors give it access to the complete state of the environment at each point in time; in a partially observable environment, they do not. A fully observable AI environment provides all the information required to complete the target task; image recognition, for example, operates in fully observable domains. Partially observable environments, such as those encountered in self-driving vehicle scenarios, must solve AI problems with only partial information.
  • Deterministic vs. stochastic: In a deterministic environment, the next state is completely determined by the current state and the action executed by the agent; in other words, the outcome can be determined from a specific state, and uncertainty can be ignored. A stochastic environment is random in nature and cannot be completely predicted. For example, the 8-puzzle has a deterministic environment, while a self-driving car does not. Most real-world AI environments are not deterministic and are better classified as stochastic; self-driving vehicles are a classic example (a short code sketch contrasting the two follows this section).
  • Static vs. dynamic: A static environment does not change while the agent is deliberating; a dynamic environment does. Backgammon has a static environment, while a Roomba operates in a dynamic one. Static AI environments rely on data and knowledge sources that do not change frequently over time; speech analysis is a problem that operates in static AI environments. In contrast, dynamic AI environments, such as the vision systems in drones, deal with data sources that change frequently.
  • Discrete vs. continuous: A limited number of distinct, clearly defined percepts and actions constitutes a discrete environment. Discrete AI environments are those in which a finite (although arbitrarily large) set of possibilities can drive the final outcome of the task; chess is classified as a discrete AI problem. Continuous AI environments rely on unknown and rapidly changing data sources; vision systems in drones or self-driving cars operate in continuous AI environments.
  • Single-agent vs. multi-agent: An agent operating entirely by itself is in a single-agent environment; if other agents are involved, it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.
  • Episodic vs. non-episodic: In an episodic environment, each episode consists of the agent perceiving and then acting, and the quality of its action depends only on the episode itself; subsequent episodes do not depend on actions taken in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.
  • Known vs. unknown: In a known environment, the outcomes of all possible actions are given. In an unknown environment, the agent must first learn how the environment works before it can make good decisions.

In total, there are roughly eight such groupings, and a given environment can belong to several of them at once.
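To make the deterministic vs. stochastic distinction above more concrete, the following sketch contrasts a deterministic transition (an 8-puzzle-style move, fully determined by the state and the action) with a stochastic one (a movement command whose outcome can randomly "slip", as in robotics or self-driving settings). Both functions are simplified illustrations under assumed dynamics, not reference implementations.

```python
import random

def deterministic_step(state, action):
    """8-puzzle-like move: the next state is fully determined by (state, action)."""
    x, y = state
    moves = {"up": (x, y + 1), "down": (x, y - 1),
             "left": (x - 1, y), "right": (x + 1, y)}
    return moves[action]

def stochastic_step(state, action, noise=0.2):
    """Robot-like move: with probability `noise`, the action slips to a random one."""
    if random.random() < noise:
        action = random.choice(["up", "down", "left", "right"])
    return deterministic_step(state, action)

# The deterministic step always yields the same result for the same inputs...
assert deterministic_step((0, 0), "right") == (1, 0)
# ...while repeated stochastic steps from the same state can land in different places.
print({stochastic_step((0, 0), "right") for _ in range(20)})
```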



[More to come ...]

