Reinforcement and Association in Behavioral Psychology

 
Reinforcement

CS786
25th January 2022
 
Association vs reinforcement
 
Association: things that occur together in the world occur together in the mind
Tested using classical conditioning
The environment acts on the observer
Reinforcement: actions that are rewarded become desirable in the future
Tested using operant/instrumental conditioning
The observer acts on the environment
 
Operant conditioning
 
Observers act upon the world and face consequences
Consequences can be interpreted as rewards
 
Modeling classical conditioning
 
The most popular approach for years was the Rescorla-Wagner model:

$V_X^{n+1} = V_X^n + \alpha\beta(\lambda - V_{tot}^n)$

It could reproduce a number of empirical observations in classical conditioning experiments.

http://users.ipfw.edu/abbott/314/Rescorla2.htm

Some versions replace $V_{tot}$ with $V_X$; what is the difference?
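
A minimal simulation sketch of this update rule, in Python (the function name, parameter values, and the blocking demonstration are illustrative assumptions, not part of the original slides):

```python
# Rescorla-Wagner updating: all stimuli present on a trial share one
# error term, (lambda - V_tot), the associative strength still available.

def rescorla_wagner(trials, alpha_beta=0.2, lam=1.0):
    """trials: a list of sets of stimuli presented together with the US."""
    V = {}
    for stimuli in trials:
        v_tot = sum(V.get(s, 0.0) for s in stimuli)  # pooled prediction V_tot
        delta = alpha_beta * (lam - v_tot)           # shared update size
        for s in stimuli:
            V[s] = V.get(s, 0.0) + delta
    return V

# One empirical observation the model reproduces: blocking.
# Pretraining on A alone leaves almost no error for X to absorb later.
V = rescorla_wagner([{"A"}] * 20 + [{"A", "X"}] * 20)
print(V)  # V["A"] is near 1.0; V["X"] stays close to 0
```

Swapping v_tot for the stimulus's own V[s] in the error term makes each stimulus learn independently of its companions, removing compound-stimulus effects such as blocking; that is the contrast the question above points at.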
 
 
 
Can modify to accommodate reward prediction

Original equation: the size of each update is based on the associative strength still available, the $(\lambda - V_{tot}^n)$ term.

Bush-Mosteller model of reinforcement, for action $a$:

$V_a^{n+1} = V_a^n + \alpha(R^n - V_a^n)$
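
A matching sketch of the Bush-Mosteller update (the action names and reward schedule are illustrative assumptions):

```python
# Bush-Mosteller: the value of the action actually taken moves a
# fraction alpha of the way toward the reward it just produced.

def bush_mosteller(V, action, reward, alpha=0.1):
    V[action] += alpha * (reward - V[action])

V = {"lever": 0.0, "chain": 0.0}
for _ in range(50):
    bush_mosteller(V, "lever", reward=1.0)  # only the chosen action updates
print(V)  # V["lever"] approaches 1.0; V["chain"] stays 0.0
```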
 
The MDP framework
 
An MDP is the tuple {S,A,R,P}
Set of states (S)
Set of actions (A)
Possible rewards (R) for each {s,a} combination
P(s'|s,a) is the probability of reaching state s' given you took action a while in state s
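
Written out concretely (a minimal sketch; representing the tuple with Python sets and dictionaries is a modelling choice here, not a standard API):

```python
from dataclasses import dataclass

# The {S, A, R, P} tuple as plain containers.
@dataclass
class MDP:
    states: set        # S
    actions: set       # A
    rewards: dict      # R: (s, a) -> immediate reward
    transitions: dict  # P: (s, a) -> {s': probability of reaching s'}
```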
 
An example MDP
 
States: hungry, taste-deprived, full, happy, unhappy
Actions: go to hostel mess, delivery from restaurant, make Maggi
Reward(state, action):
R(hungry, mess) = 10
R(taste-deprived, mess) = -100
State transition probabilities:
P(full | hungry, maggi) = 0.4
P(happy | taste-deprived, mess) = 0
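
The same example, filled into the MDP sketch above (only the numbers stated on the slide are encoded; everything else is left unspecified):

```python
# Only the entries given on the slide are filled in; missing rewards and
# transitions are omitted and treated as zero by the solver below.
mdp = MDP(
    states={"hungry", "taste-deprived", "full", "happy", "unhappy"},
    actions={"mess", "restaurant", "maggi"},
    rewards={
        ("hungry", "mess"): 10,
        ("taste-deprived", "mess"): -100,
    },
    transitions={
        ("hungry", "maggi"): {"full": 0.4},          # remaining 0.6 unspecified
        ("taste-deprived", "mess"): {"happy": 0.0},  # mess never leads to happy
    },
)
```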
 
Solution strategy
 
Update the value function and the action policy iteratively (see the sketch below)
 
 
https://towardsdatascience.com/getting-started-with-markov-decision-processes-reinforcement-learning-ada7b4572ffb
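
One standard instance of this strategy is value iteration, sketched below for the MDP structure used earlier (the discount factor and tolerance are assumptions; entries missing from the slide default to zero):

```python
# Value iteration: repeat V(s) <- max_a [ R(s,a) + gamma * E[V(s')] ]
# until the values converge, then read off the greedy policy.

def q_value(mdp, V, s, a, gamma):
    expected = sum(p * V[s2]
                   for s2, p in mdp.transitions.get((s, a), {}).items())
    return mdp.rewards.get((s, a), 0.0) + gamma * expected

def value_iteration(mdp, gamma=0.9, tol=1e-6):
    V = {s: 0.0 for s in mdp.states}
    while True:
        new_V = {s: max(q_value(mdp, V, s, a, gamma) for a in mdp.actions)
                 for s in mdp.states}
        if max(abs(new_V[s] - V[s]) for s in mdp.states) < tol:
            break
        V = new_V
    policy = {s: max(mdp.actions, key=lambda a: q_value(mdp, V, s, a, gamma))
              for s in mdp.states}
    return V, policy

V, policy = value_iteration(mdp)
print(policy)  # e.g. policy["hungry"] picks "mess", the reward-10 action
```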