How to prevent the eligibility trace in SARSA with lambda = 1 from exploding for state-action pairs that are visited a huge number of times?

I was testing SARSA with lambda = 1 on Windy Grid World. If exploration causes the same state-action pair to be visited many times before reaching the goal, its eligibility trace is incremented on every visit without any decay, so it grows without bound and eventually everything overflows. How can this be avoided?
If I've understood your question correctly, the problem is that the trace for a given state-action pair gets incremented too much. In this case, a potential solution is to use replacing traces instead of the classic accumulating (incremental) traces.
The idea in replacing traces is to reset the trace to a fixed value (typically 1) each time the state-action pair is visited, instead of adding 1 on top of the decayed value. The trace therefore never exceeds 1, no matter how often the pair is revisited. A sketch of both kinds of update is given below.
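For concreteness, here is a minimal sketch of the trace update inside one tabular SARSA(lambda) backup, assuming a NumPy Q-table and trace table of shape (n_states, n_actions). The sizes, learning rate, and function name are illustrative assumptions, not taken from your code:

```python
import numpy as np

n_states, n_actions = 70, 4          # illustrative sizes for a small grid world
alpha, gamma, lam = 0.5, 1.0, 1.0    # lambda = 1, as in the question

Q = np.zeros((n_states, n_actions))
E = np.zeros((n_states, n_actions))  # eligibility traces, reset to 0 at episode start

def sarsa_lambda_step(s, a, r, s_next, a_next, replacing=True):
    """One SARSA(lambda) backup with either replacing or accumulating traces."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]

    # Decay all traces (no actual decay here, since gamma = lambda = 1).
    E[:] *= gamma * lam

    if replacing:
        # Replacing traces: the visited pair's trace is capped at 1,
        # so repeated visits cannot make it grow without bound.
        E[s, a] = 1.0
    else:
        # Accumulating traces: each visit adds 1 on top of the decayed value,
        # which with gamma = lambda = 1 grows by 1 per visit and can overflow.
        E[s, a] += 1.0

    # Update every state-action pair in proportion to its trace.
    Q[:] += alpha * delta * E
```

With replacing traces every entry of E stays in [0, 1], so each update to Q is bounded by alpha * |delta|, which avoids the overflow you see with accumulating traces at lambda = 1.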
You can find more information in the classic Sutton & Barto book Reinforcement Learning: An Introduction, specifically in Section 7.8, which covers replacing traces.