Can I use Reinforcement Learning for a problem that has a non-continuous observation space?

I want to train an agent to place a polyomino (just one kind, e.g. a 2x2 square) on a 9x9 field that is either empty or already contains several other polyominoes (not the 2x2 square). The observation space would therefore not be continuous. Is this a proper use case for RL?

1 Answer

Sure, why not? The simplest reinforcement learning algorithms work with a discrete state space (and, indeed, their convergence guarantees assume the agent can visit each state sufficiently many times). Even if there are too many states and you have to replace the Q function with a learned approximation (typically a neural network), you can use a one-hot encoding of the board as the input.
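
For example, here is a minimal sketch of that idea. It assumes each of the 81 cells is simply "empty" or "occupied by another polyomino", and that the action is choosing one of the 8x8 = 64 top-left corners for the 2x2 square; both encodings are illustrative choices, not something fixed by your question.

```python
# Sketch only: a one-hot encoded 9x9 board fed into a small Q-network.
# Assumptions (not from the question): cells are 0 = empty, 1 = occupied
# by another polyomino; actions are the 64 possible top-left corners.
import numpy as np
import torch
import torch.nn as nn

N_CELLS = 9 * 9          # one discrete value per cell
N_CELL_STATES = 2        # 0 = empty, 1 = occupied by another polyomino
N_ACTIONS = 8 * 8        # hypothetical action set: top-left corner of the 2x2 square

def one_hot_board(board: np.ndarray) -> torch.Tensor:
    """Turn a 9x9 integer board into a flat one-hot vector for the Q-network."""
    flat = board.reshape(-1)                                  # shape (81,)
    one_hot = np.eye(N_CELL_STATES, dtype=np.float32)[flat]   # shape (81, 2)
    return torch.from_numpy(one_hot.reshape(-1))              # shape (162,)

# A small feed-forward Q-function approximator over the one-hot input,
# producing one Q-value per placement action.
q_net = nn.Sequential(
    nn.Linear(N_CELLS * N_CELL_STATES, 128),
    nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)

board = np.zeros((9, 9), dtype=np.int64)   # example: mostly empty field
board[0:3, 0:3] = 1                        # pretend another polyomino sits here
q_values = q_net(one_hot_board(board))     # one Q-value per candidate placement
print(q_values.shape)                      # torch.Size([64])
```

With an encoding like this you can plug the board into standard value-based methods (e.g. DQN); invalid placements can be handled by masking those actions or giving them a negative reward.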