We know that DDPG is a deterministic policy gradient method, so the output of its policy network should be a specific action. However, I once tried letting the output of the policy network be a probability distribution over several actions: the output has more than one element, each action has its own probability, and the probabilities sum to 1. The form of the output looks like that of a stochastic policy gradient method, but the gradients are computed and the network is updated in the DDPG way. In the end, the results looked quite good, but I don't understand why it works, since the output form isn't what DDPG requires.
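For concreteness, here is a minimal sketch (in PyTorch, with hypothetical dimensions, not my actual code) of what I mean: the actor ends in a softmax, its output vector is fed to the critic as the "action", and the actor is updated through the usual DDPG chain rule.

```python
# A minimal sketch (hypothetical sizes) of the setup described above:
# the actor outputs a probability vector, the critic treats that vector
# as the "action", and the actor is updated via dQ/da * da/dtheta.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 3  # hypothetical sizes

actor = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
    nn.Softmax(dim=-1),  # output: probabilities that sum to 1
)
critic = nn.Sequential(  # Q(s, a) where "a" is the probability vector
    nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

state = torch.randn(32, STATE_DIM)              # a batch of states
probs = actor(state)                            # "action" = distribution over actions
q = critic(torch.cat([state, probs], dim=-1))   # critic evaluates the distribution

# DDPG-style actor update: ascend Q by backpropagating through the
# probability vector into the actor's parameters.
actor_loss = -q.mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
```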
Can the output of DDPG policy network be a probability distribution instead of a certain action value?
1 Answer:
It would work if you also include the gradient with respect to the distribution; otherwise it works just by chance.
If you do something like

∇_θ J = E[ ∇_θ log π_θ(a|s) · Q(s, a) ],

i.e., sample an action from the softmax distribution and weight the gradient of its log-probability by the value estimate, then this is the regular stochastic policy gradient with a softmax distribution, which was very common before deterministic policy gradients were introduced (and is still used sometimes).
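For contrast, here is a minimal sketch (again PyTorch, hypothetical dimensions, with a random placeholder standing in for the critic's value estimate) of that stochastic, softmax-based update:

```python
# A minimal sketch of the stochastic policy gradient described above:
# sample an action from the softmax distribution and weight the gradient
# of its log-probability by a value estimate.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 3  # hypothetical sizes

actor = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
    nn.Softmax(dim=-1),
)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

state = torch.randn(32, STATE_DIM)
probs = actor(state)
dist = torch.distributions.Categorical(probs=probs)
action = dist.sample()

# Placeholder for Q(s, a) from a critic (or a sampled return); random here
# only to keep the sketch self-contained.
q_value = torch.randn(32)

# The gradient flows through log pi(a|s), i.e. through the distribution
# itself -- the term that must be included for the update to be sound.
policy_loss = -(dist.log_prob(action) * q_value).mean()
actor_opt.zero_grad()
policy_loss.backward()
actor_opt.step()
```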