Decision Analysis: Problem Formulation, Decision Making, and Risk Analysis

 
 
Chapter 8
Decision Analysis
 
- Problem Formulation
- Decision Making without Probabilities
- Decision Making with Probabilities
- Risk Analysis and Sensitivity Analysis
- Decision Analysis with Sample Information
- Computing Branch Probabilities
 
 
Problem Formulation
 
- A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs.
- The decision alternatives are the different possible strategies the decision maker can employ.
- The states of nature refer to future events, not under the control of the decision maker, which may occur. States of nature should be defined so that they are mutually exclusive and collectively exhaustive.
 
 
Payoff Tables
 
- The consequence resulting from a specific combination of a decision alternative and a state of nature is a payoff.
- A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table.
- Payoffs can be expressed in terms of profit, cost, time, distance, or any other appropriate measure.
 
 
Decision Trees
 
- A decision tree is a chronological representation of the decision problem.
- Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
- The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.
- At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.
 
 
Decision Tree Example
 
 
Decision Making without Probabilities
 
- Three commonly used criteria for decision making when probability information regarding the likelihood of the states of nature is unavailable are:
  - the optimistic approach
  - the conservative approach
  - the minimax regret approach
 
 
Optimistic Approach
 
- The optimistic approach would be used by an optimistic decision maker.
- The decision with the largest possible payoff is chosen.
- If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.
 
 
Conservative Approach
 
- The conservative approach would be used by a conservative decision maker.
- For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
- If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)
 
 
Minimax Regret Approach
 
- The minimax regret approach requires the construction of a regret table or an opportunity loss table.
- This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
- Then, using this regret table, the maximum regret for each possible decision is listed.
- The decision chosen is the one corresponding to the minimum of the maximum regrets.
 
 
Example
 
  
Consider the following problem with three decision alternatives and three states of nature, with the following payoff table representing profits:

                       States of Nature
                     s1      s2      s3
              d1      4       4      -2
  Decisions   d2      0       3      -1
              d3      1       5      -3
 
 
Example:  Optimistic Approach
 
  
An optimistic decision maker would use the optimistic (maximax) approach. We choose the decision that has the largest single value in the payoff table.

                 Maximum
  Decision        Payoff
     d1              4
     d2              3
     d3              5

The maximax payoff is 5, so the maximax decision is d3.
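As an illustrative sketch (not part of the original slides), the maximax rule for this payoff table can be computed in a few lines of Python:

```python
# Maximax (optimistic) approach: pick the decision whose best-case
# payoff is largest. Rows are decisions; columns are states s1, s2, s3.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Best possible payoff for each decision.
best = {d: max(row) for d, row in payoffs.items()}

# Decision with the largest best-case payoff.
maximax_decision = max(best, key=best.get)

print(best)              # {'d1': 4, 'd2': 3, 'd3': 5}
print(maximax_decision)  # d3
```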
 
 
Example:  Conservative Approach
 
  
A conservative decision maker would use the conservative (maximin) approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

                 Minimum
  Decision        Payoff
     d1             -2
     d2             -1
     d3             -3

The maximin payoff is -1, so the maximin decision is d2.
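The maximin computation follows the same pattern; here is a minimal sketch reusing the example table (an illustration, not part of the slides):

```python
# Maximin (conservative) approach: pick the decision whose worst-case
# payoff is largest.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}

# Worst possible payoff for each decision.
worst = {d: min(row) for d, row in payoffs.items()}

# Decision with the largest worst-case payoff.
maximin_decision = max(worst, key=worst.get)

print(worst)             # {'d1': -2, 'd2': -1, 'd3': -3}
print(maximin_decision)  # d2
```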
 
 
  
Example:  Minimax Regret Approach

For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. In this example, in the first column subtract 4, 0, and 1 from 4, and so on. The resulting regret table is:

              s1      s2      s3
     d1        0       1       1
     d2        4       2       0
     d3        3       0       2
 
 
Example:  Minimax Regret Approach

For each decision, list the maximum regret. Choose the decision with the minimum of these values.

                 Maximum
  Decision        Regret
     d1              1
     d2              4
     d3              3

The minimax regret is 1, so the minimax regret decision is d1.
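Both steps of this criterion — building the regret table and applying the minimax rule — can be sketched as follows (an illustration, not part of the slides):

```python
# Minimax regret: build the regret (opportunity loss) table, then pick
# the decision whose maximum regret is smallest.
payoffs = {
    "d1": [4, 4, -2],
    "d2": [0, 3, -1],
    "d3": [1, 5, -3],
}
n_states = 3

# Largest payoff in each column (i.e., under each state of nature).
col_best = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]

# Regret = column best minus the payoff actually received.
regret = {d: [col_best[j] - row[j] for j in range(n_states)]
          for d, row in payoffs.items()}

# Maximum regret per decision, then the minimax choice.
max_regret = {d: max(r) for d, r in regret.items()}
minimax_regret_decision = min(max_regret, key=max_regret.get)

print(regret)                   # {'d1': [0, 1, 1], 'd2': [4, 2, 0], 'd3': [3, 0, 2]}
print(minimax_regret_decision)  # d1
```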
 
 
Decision Making with Probabilities
 
- Expected Value Approach
  - If probabilistic information regarding the states of nature is available, one may use the expected value (EV) approach.
  - Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
  - The decision yielding the best expected return is chosen.
 
 
Expected Value of a Decision Alternative

- The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
- The expected value (EV) of decision alternative di is defined as:

      EV(di) = Σ (from j = 1 to N) P(sj) Vij

  where:  N     = the number of states of nature
          P(sj) = the probability of state of nature sj
          Vij   = the payoff corresponding to decision alternative di
                  and state of nature sj
 
 
Example:  Burger Prince
 
  
Burger Prince Restaurant is considering opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is on the next slide.
 
 
Payoff Table
 
 
 
  
     
 
              Average Number of Customers Per Hour
               s1 = 80      s2 = 100      s3 = 120
  Model A      $10,000       $15,000       $14,000
  Model B      $ 8,000       $18,000       $12,000
  Model C      $ 6,000       $16,000       $21,000
 
 
 
Expected Value Approach
 
 
  
Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, d3 represent the decision alternatives of models A, B, and C, and s1, s2, s3 represent the states of nature of 80, 100, and 120 customers per hour.
 
 
Decision Tree

  Decision node 1 leads to three chance nodes, one per model; payoffs
  appear at the ends of the branches:

    d1 → chance node 2:  s1 (.4) → 10,000   s2 (.2) → 15,000   s3 (.4) → 14,000
    d2 → chance node 3:  s1 (.4) →  8,000   s2 (.2) → 18,000   s3 (.4) → 12,000
    d3 → chance node 4:  s1 (.4) →  6,000   s2 (.2) → 16,000   s3 (.4) → 21,000
 
 
Expected Value for Each Decision

  Model A (d1, node 2):  EMV = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
  Model B (d2, node 3):  EMV = .4(8,000)  + .2(18,000) + .4(12,000) = $11,600
  Model C (d3, node 4):  EMV = .4(6,000)  + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest EV: Model C.
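The three EMV calculations can be reproduced directly; here is a sketch using the Burger Prince data (an illustration, not part of the slides):

```python
# Expected value approach: EV(di) = sum over j of P(sj) * Vij.
probs = [0.4, 0.2, 0.4]          # P(s1), P(s2), P(s3)
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

# EMV per decision (rounded to whole dollars to sidestep float noise).
ev = {d: round(sum(p * v for p, v in zip(probs, row)))
      for d, row in payoffs.items()}

# Decision with the largest expected value.
best = max(ev, key=ev.get)

print(ev)    # {'Model A': 12600, 'Model B': 11600, 'Model C': 14000}
print(best)  # Model C
```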
 
 
Expected Value of Perfect Information
 
- Frequently, information is available that can improve the probability estimates for the states of nature.
- The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
- The EVPI provides an upper bound on the expected value of any sample or survey information.
 
 
Expected Value of Perfect Information
 
- EVPI Calculation
  - Step 1:  Determine the optimal return corresponding to each state of nature.
  - Step 2:  Compute the expected value of these optimal returns.
  - Step 3:  Subtract the EV of the optimal decision from the amount determined in step (2).
 
 
 
 
 
Expected Value of Perfect Information

Calculate the expected value for the optimum payoff for each state of nature and subtract the EV of the optimal decision.

  EVPI = EVwPI - EVwoPI
       = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000
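The three EVPI steps can be sketched in code, assuming the same probabilities and Burger Prince payoff table (an illustration, not part of the slides):

```python
# EVPI = EVwPI - EVwoPI.
probs = [0.4, 0.2, 0.4]
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}
n_states = len(probs)

# Step 1: optimal return under each state of nature.
best_per_state = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]

# Step 2: expected value of these optimal returns (EV with perfect info).
ev_wpi = sum(p * v for p, v in zip(probs, best_per_state))

# EV without perfect information: the best EMV among the decisions.
ev_wopi = max(sum(p * v for p, v in zip(probs, row)) for row in payoffs.values())

# Step 3: the difference is the EVPI.
evpi = round(ev_wpi - ev_wopi)
print(evpi)  # 2000
```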
 
 
Risk Analysis
 
- Risk analysis helps the decision maker recognize the difference between:
  - the expected value of a decision alternative, and
  - the payoff that might actually occur.
- The risk profile for a decision alternative shows the possible payoffs for the decision alternative along with their associated probabilities.
 
 
Risk Profile
 
- Model C Decision Alternative

       Profit          Probability
      $ 6,000              .40
      $16,000              .20
      $21,000              .40

  (The original slide shows this as a bar chart, with probability on the
  vertical axis and profit in $thousands on the horizontal axis.)
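A risk profile is just the probability distribution over a decision's payoffs; for Model C it can be assembled like this (a sketch, aggregating probabilities in case two states share a payoff):

```python
# Risk profile for Model C: possible payoffs with their probabilities.
probs = [0.4, 0.2, 0.4]                  # P(s1), P(s2), P(s3)
model_c_payoffs = [6_000, 16_000, 21_000]

profile = {}
for p, v in zip(probs, model_c_payoffs):
    # Aggregate in case two states of nature yield the same payoff.
    profile[v] = profile.get(v, 0.0) + p

for payoff, prob in sorted(profile.items()):
    print(f"${payoff:>6,}  {prob:.2f}")
# $ 6,000  0.40
# $16,000  0.20
# $21,000  0.40
```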
 
 
Sensitivity Analysis
 
- Sensitivity analysis can be used to determine how changes to the following inputs affect the recommended decision alternative:
  - probabilities for the states of nature
  - values of the payoffs
- If a small change in the value of one of the inputs causes a change in the recommended decision alternative, extra effort and care should be taken in estimating the input value.
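As a small illustration of sensitivity analysis (the swept probability values here are hypothetical, not from the slides), one can vary P(s1) — holding P(s2) at .2 so that P(s3) = .8 - P(s1) — and watch whether the recommended model changes:

```python
# Sensitivity of the Burger Prince recommendation to P(s1).
payoffs = {
    "Model A": [10_000, 15_000, 14_000],
    "Model B": [8_000, 18_000, 12_000],
    "Model C": [6_000, 16_000, 21_000],
}

def best_model(p1, p2=0.2):
    """Return the decision with the highest EMV for the given probabilities."""
    probs = [p1, p2, 1.0 - p1 - p2]
    ev = {d: sum(p * v for p, v in zip(probs, row))
          for d, row in payoffs.items()}
    return max(ev, key=ev.get)

for p1 in (0.2, 0.4, 0.6, 0.8):
    print(p1, best_model(p1))
# The recommendation flips from Model C to Model A as P(s1) grows,
# so the decision is sensitive to this probability estimate.
```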
 

