Plan Generation

A plan is a sequence of actions that can transform an initial configuration of a problem into a goal configuration.

In a search tree, a plan is represented by the labels of the arcs along a path from the start node to a goal node.

Components of a Planning System

In any general problem solving system, elementary techniques to perform the following functions are required:

  • choose the best rule (based on heuristics) to be applied

  • apply the chosen rule to get a new problem state

  • detect when a solution has been found

  • detect dead ends so that new directions are explored
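The four functions above can be illustrated with a minimal greedy best-first solver. This is a sketch, not a standard algorithm from the text: the `rules` function (mapping a state to (rule, new-state) pairs) and the numeric toy problem are hypothetical.

```python
import heapq

def solve(start, goal, rules, heuristic):
    """Greedy best-first problem solver illustrating the four functions above."""
    frontier = [(heuristic(start), start, [])]
    visited = {start}
    while frontier:
        # choose the best state to expand next (based on the heuristic)
        _, state, path = heapq.heappop(frontier)
        # detect when a solution has been found
        if state == goal:
            return path
        # apply each applicable rule to get new problem states
        for rule, new_state in rules(state):
            if new_state not in visited:  # avoid re-exploring old states
                visited.add(new_state)
                heapq.heappush(
                    frontier, (heuristic(new_state), new_state, path + [rule])
                )
    return None  # detect a dead end: every direction has been explored

# Toy usage (hypothetical numeric states): reach 4 from 0 using +1 / +2 rules
plan = solve(0, 4, lambda s: [("+1", s + 1), ("+2", s + 2)], lambda s: abs(4 - s))
# plan == ["+2", "+2"]
```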

Steps for Plan Generation using SSS

Steps for plan generation using state space search (SSS):

  1. Apply SSS to generate a plan

  2. Execute the plan

Issues and Limitations of Plan Generation using SSS

Problem domains where plan generation with state space search may run into issues:

  • Control of a mobile robot

  • Optimization of operation in factories

  • Playing board games

Limitations of Plan Generation:

Plan generation may be limited by:

  • Uncertainties in

    • Perceptual processes

      • Perceptual processes are not immune to noise and may be insensitive to some important features of the environment

    • Effector systems

      • Effector systems occasionally make errors in executing actions

    • The environment

      • Other physical processes or agents in the world may interfere during plan execution (e.g. games with adversaries).

  • Scarcity in computational resources

    • While a plan is being searched for, the world may change due to external effects.

    • It may be necessary to act on the world before the search for a goal state is completed.

    • Even if the time allowed for planning is sufficient, the available computational memory may be exhausted before the search for a goal state is complete.

Uncertainty

In knowledge representation, if we write \(A \rightarrow B\), it means that if \(A\) is true, then \(B\) is true. However, consider a situation where we are not sure whether \(A\) is true or not. Such a statement cannot be expressed with this rule alone. This situation is called uncertainty.

Causes of Uncertainty

Uncertainty may arise due to:

  • information received from unreliable sources

  • experimental errors

  • equipment failures

  • temperature variations

  • climate change

Uncertainty Management Approaches

There are two major approaches to deal with uncertainty:

  • Probabilistic methods

  • Sense/Plan/Act architecture (developed by Nilsson)

Probabilistic Methods

Probabilistic methods are widely used in AI to handle the uncertainty inherent in many real-world problems.

These methods allow AI systems to make decisions or predictions when the available information is incomplete, noisy or ambiguous.

This approach is fundamental in creating intelligent systems that can operate effectively in complex real-world environments, where information is often incomplete or noisy.

Probabilistic methods may be applied to formalize perceptual, environmental and effector uncertainties.

Confronting Actions with Uncertain Effects

In many real world scenarios, an agent’s actions do not always lead to deterministic outcomes, but rather, have probabilistic consequences.

Probability distributions help model these uncertainties and provide a way for the agent to reason about and plan under uncertainty.

When an agent takes an action, the outcome may not be certain. For example, in a robot navigation task, the robot may not always reach its intended destination due to obstacles, slippage or sensor errors. Probability distributions are used to model the range of possible outcomes and their associated likelihoods.

  • For each action \(a\) taken in a given state \(s\), the agent may transition to a new state \(s'\), but this transition is probabilistic.

  • The transition from one state to another after taking an action can be described by a probability distribution, \(P(s' | s, a)\), which gives the probability of reaching state \(s'\), given that the agent was in state \(s\) and took action \(a\).

@TODO: add image of probability distribution vs state graph

Example: Suppose a robot takes an action \(a\) (say, \(a =\) "move forward") in its current state \(s\). The outcome may be:

  1. with 70% probability, it moves to the desired position \(s_1\)

  2. with 20% probability, it slightly deviates from the desired position and reaches \(s_2\)

  3. with 10% probability, it fails to move and remains in the current state \(s\)

The uncertainty is captured by the probability distribution \(P(s' | s, a)\) as:

\[P(s' | s, a) = \{ P(s_1 | s, a) = 0.7,\; P(s_2 | s, a) = 0.2,\; P(s | s, a) = 0.1 \}\]

Finding actions under such circumstances is modeled as a Markov Decision Process (MDP) [Puterman, 1994].
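A distribution like the one in the example can be simulated by sampling outcomes according to their probabilities. This is an illustrative sketch; the state names mirror the robot example above, with the current state written `"s"`.

```python
import random

# Transition model P(s' | s, a) for the "move forward" example above
P = {"s1": 0.7, "s2": 0.2, "s": 0.1}

def sample_outcome(transition_probs, rng=random):
    """Draw one stochastic outcome state according to P(s' | s, a)."""
    r = rng.random()  # uniform draw in [0, 1)
    cumulative = 0.0
    for state, prob in transition_probs.items():
        cumulative += prob
        if r < cumulative:
            return state
    return state  # guard against floating-point rounding at the boundary

# Empirical check: frequencies over many trials approximate the model
random.seed(0)
counts = {s: 0 for s in P}
for _ in range(10_000):
    counts[sample_outcome(P)] += 1
# counts roughly in the ratio 0.7 : 0.2 : 0.1
```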

Confronting Imperfect Perception

A probabilistic approach can be applied to confront imperfect perception in AI.

In many AI systems, especially those relying on sensors (such as cameras and microphones), the data is often noisy, incomplete or distorted. A probabilistic approach allows us to represent this uncertainty explicitly.

Models such as Gaussian distributions and Bayesian networks are used to represent such uncertainties about sensor readings and the underlying state of the world. Dealing with imperfect perception can be formalized by assuming that the agent's sensory processes provide a probability distribution over the set of states it might be in.

@TODO: add image of input from environment affecting sensor

Finding actions in such a situation is modeled as a Partially Observable Markov Decision Process (POMDP) [Lovejoy, 1991] [Monahan, 1982] [Cassandra, Kaelbling & Littman, 1994].
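The idea of a probability distribution over possible states can be sketched with a Bayesian belief update, \(P(s \mid o) \propto P(o \mid s)\,P(s)\). The two-state world and the noisy wall sensor below are hypothetical examples.

```python
def update_belief(belief, observation, sensor_model):
    """Bayesian belief update: posterior P(s | o) proportional to P(o | s) * P(s)."""
    posterior = {s: sensor_model[s][observation] * p for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}  # normalize to sum to 1

# Hypothetical two-state world: the robot is in a "corridor" or a "room"
belief = {"corridor": 0.5, "room": 0.5}

# Sensor model P(observation | state) for a noisy wall detector
sensor_model = {
    "corridor": {"wall": 0.9, "open": 0.1},
    "room": {"wall": 0.3, "open": 0.7},
}

belief = update_belief(belief, "wall", sensor_model)
# belief["corridor"] == 0.75: observing a wall makes "corridor" more likely
```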

Sense/Plan/Act Architecture

Sense/Plan/Act architecture is a foundational model for structuring intelligent agents in AI, developed by Nilsson.

The rationale for this architecture is that:

  • it provides continuous feedback from the environment to the agent while it is executing its plan

  • it helps deal with the difficulties arising from

    • actions occasionally producing unanticipated effects

    • uncertainties in perceptual processes, due to which the agent sometimes cannot decide which world state it is in

Sense/Plan/Act Algorithm

To provide continuous feedback from the environment to the agent, the following steps are followed:

  • Plan a sequence of actions

  • Execute just the first action in this sequence (initial action)

  • Sense the resulting environmental situation

  • Recalculate the start node

  • Repeat the process

Agents that select actions in this manner are said to be sense/plan/act agents.

For this method to be effective, the time taken to compute a plan must be less than the time allowed for each action.
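The steps above can be sketched as a loop. The `sense`, `plan`, `act` and `goal_reached` callbacks, as well as the one-dimensional toy world, are hypothetical placeholders for an agent's real components.

```python
def sense_plan_act(sense, plan, act, goal_reached, max_cycles=100):
    """Sense/plan/act loop: replan from the freshly sensed state each cycle."""
    for _ in range(max_cycles):
        state = sense()          # sense the resulting environmental situation
        if goal_reached(state):
            return state
        actions = plan(state)    # plan a sequence of actions from this state
        if not actions:
            return None          # no plan found from the current state
        act(actions[0])          # execute just the first action in the sequence
    return None                  # planning-time budget exhausted

# Toy 1-D world (hypothetical): move right from position 0 to position 3
world = {"pos": 0}
final = sense_plan_act(
    sense=lambda: world["pos"],
    plan=lambda s: ["right"] * (3 - s),          # recalculate from the start node
    act=lambda a: world.update(pos=world["pos"] + 1),
    goal_reached=lambda s: s == 3,
)
# final == 3
```

Note that the plan is recomputed after every single action, which is exactly what lets environmental feedback correct for unanticipated effects.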

@TODO: add image of the sense/plan/act cycle

Environmental feedback in the sense/plan/act cycle allows resolution of some of the perceptual, environmental and effector uncertainties.

For feedback to be effective, however, we must assume that, on average, sensing and acting are accurate.

Components of Sense/Plan/Act Architecture

This architecture divides the process of decision-making and interaction with the environment into three core components:

Sense (Perception)

This is the first step, where the agent collects information from the environment. The agent uses sensors or input devices to gather data about the world, such as visual, auditory or other sensory input (e.g. to scan for obstacles or detect objects).

The goal of this phase is to form an understanding or a representation of the environment that can be used to guide decisions.

Example: A robot uses cameras, microphones or other sensors to collect data about its surroundings.

Plan (Reasoning)

Once the agent has sensed the environment, it needs to decide what actions to take.

This involves reasoning which includes processing the sensory data and generating a plan based on the current state and the agent’s goal.

The goal of this reasoning or planning step is to produce a sequence of actions that will lead to the achievement of the agent’s goals.

Example: A robotic vacuum cleaner may calculate the most efficient path to clean a room based on the layout of the room and the location of obstacles.

Act (Execution)

After planning, the agent needs to act or execute the plan. This step involves taking actions in the environment through actuators or output devices.

The goal is to perform the actions determined by the planning step in the real world.

Example: A robot might use motors or servos to enable it to move in its desired direction.