Modelling an action-use limit in a Markov Decision Process


I have a Markov Decision Process with a certain number of states and actions. I want to incorporate into my model an action that can be used only once, from any of the states; once used, it cannot be used again. How do I model this action in my state diagram? I thought of adding a separate state and using -inf rewards, but neither approach seems to work out. Thanks!


To satisfy the Markov property, you have to include in each state the information of whether this action has already been used; there is no way around it. This will make your state space larger, but your state diagram will then work out as you expect.

Assume that you have three states, S = {1,2,3}, and two actions, A = {1,2}, where each action can be used only once. You then augment each state into a tuple (s, p1, p2), where p1 is a boolean recording whether action 1 has already been used and p2 is a boolean recording whether action 2 has already been used. This means that in total you now have 3 × 2 × 2 = 12 states: S = {(1,0,0), (1,1,0), (1,0,1), (1,1,1), (2,0,0), (2,1,0), (2,0,1), (2,1,1), (3,0,0), (3,1,0), (3,0,1), (3,1,1)}.
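The augmented state space above can be sketched in a few lines of Python. This is a minimal illustration, not a full MDP solver; the names (`available_actions`, `apply_action`) and the specific numbers (3 base states, 2 one-shot actions) are my own, chosen to match the example:

```python
from itertools import product

# Base states and one-shot actions from the example above.
BASE_STATES = [1, 2, 3]
ACTIONS = [1, 2]

# Augmented state: (s, p1, p2), where p_i flags whether action i
# has already been used anywhere in the trajectory.
aug_states = [(s, p1, p2) for s, p1, p2 in product(BASE_STATES, [0, 1], [0, 1])]

def available_actions(state):
    """An action is still legal only while its used-flag is 0."""
    s, p1, p2 = state
    flags = {1: p1, 2: p2}
    return [a for a in ACTIONS if flags[a] == 0]

def apply_action(state, action, next_base):
    """Move to next_base and set the flag for the action just taken."""
    s, p1, p2 = state
    if action == 1:
        return (next_base, 1, p2)
    return (next_base, p1, 1)
```

With this encoding, `len(aug_states)` is 12, and once `apply_action` sets a flag, `available_actions` never offers that action again — exactly the use-once behaviour the question asks for, expressed purely through the state, so the Markov property is preserved.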