Reinforcement Learning/Temporal Difference Learning


Temporal difference (TD) learning is a central and novel idea in reinforcement learning.

  • It is a combination of Monte Carlo and dynamic programming methods
  • It is a model-free learning algorithm
  • It both bootstraps (builds on top of the previous best estimate) and samples
  • It can be used for both episodic and infinite-horizon (non-episodic) domains
  • It immediately updates the estimate of V after each transition (s, a, r, s')
  • It requires the system to be Markovian
  • It is a biased estimator of the value function, but often has much lower variance than the Monte Carlo estimator
  • It converges to the true value function in the finite-state (tabular) case, but convergence is not guaranteed when the value function must be approximated (function approximation, e.g. for infinite state spaces)

Algorithm: Temporal Difference Learning TD(0)

TD learning can be applied as a spectrum between pure Monte Carlo and dynamic programming methods, but the simplest form, TD(0), is as follows:

  • Input: α
  • Initialize V^π(s) = 0 for all s ∈ S
  • Loop
    • Sample tuple (s_t, a_t, r_t, s_{t+1})
    • Update V^π(s_t) ← V^π(s_t) + α([r_t + γ V^π(s_{t+1})] − V^π(s_t)), where the bracketed term r_t + γ V^π(s_{t+1}) is the TD target

The temporal difference error is defined as δ_t = r_t + γ V^π(s_{t+1}) − V^π(s_t)
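
As an illustration (not part of the original page), here is a minimal tabular TD(0) policy-evaluation sketch in Python. The Gym-style env.reset()/env.step(a) interface returning (next_state, reward, done) and the policy callable are assumptions made for the example; the update line mirrors the TD target and TD error defined above.

from collections import defaultdict

def td0_policy_evaluation(env, policy, alpha=0.1, gamma=0.99, num_episodes=1000):
    """Tabular TD(0) policy evaluation (sketch).

    Assumes a Gym-style environment where step(a) returns
    (next_state, reward, done); `policy` maps a state to an action.
    """
    V = defaultdict(float)  # V^pi(s) initialized to 0 for all s
    for _ in range(num_episodes):
        s = env.reset()
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)
            # TD target: r_t + gamma * V^pi(s_{t+1}); bootstrap to 0 at terminal states
            td_target = r if done else r + gamma * V[s_next]
            delta = td_target - V[s]   # TD error delta_t
            V[s] += alpha * delta      # immediate update after each (s, a, r, s')
            s = s_next
    return V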

n-step Return

n = 0: G^(0) = R_t + γ V(s_{t+1}) is TD(0)

n = 1: G^(1) = R_t + γ R_{t+1} + γ² V(s_{t+2})

and so on; in general, G^(n) = R_t + γ R_{t+1} + ... + γ^n R_{t+n} + γ^(n+1) V(s_{t+n+1})

n = ∞: G^(∞) = R_t + γ R_{t+1} + ... + γ^(T−1) R_{t+T−1} is MC (the full return with no bootstrapping, for an episode that terminates after T steps)

n-step TD learning, TD(n), is then defined by the update

V^π(s_t) ← V^π(s_t) + α[G^(n) − V^π(s_t)]
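
For illustration, here is a minimal Python sketch of the n-step return and the TD(n) update under the indexing convention above. The function names, the dict-based value table, and the (rewards, bootstrap value) arguments are assumptions made for the example, not from the original page; passing a bootstrap value of 0 at the end of an episode recovers the Monte Carlo return.

def n_step_return(rewards, v_bootstrap, gamma=0.99):
    """G^(n) from rewards [R_t, ..., R_{t+n}] plus the bootstrap value
    V(s_{t+n+1}): sum_k gamma^k R_{t+k} + gamma^(n+1) V(s_{t+n+1})."""
    G = sum((gamma ** k) * r for k, r in enumerate(rewards))
    return G + (gamma ** len(rewards)) * v_bootstrap

def td_n_update(V, s_t, rewards, v_bootstrap, alpha=0.1, gamma=0.99):
    """One TD(n) update: V^pi(s_t) <- V^pi(s_t) + alpha * [G^(n) - V^pi(s_t)].
    Pass v_bootstrap = 0 if the episode has terminated (Monte Carlo case)."""
    G = n_step_return(rewards, v_bootstrap, gamma)
    V[s_t] += alpha * (G - V[s_t])

For example, calling td_n_update(V, s_t, [r_t, r_next], V[s_after_two_steps]) performs the n = 1 update above.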