I am using the Temporal Fusion Transformer from the pytorch-forecasting package for a time series prediction problem.
Let's say I want to predict the next time step from the previous 5 time steps. From what I understand, to do this we just set the following:
from pytorch_forecasting import TimeSeriesDataSet

test = TimeSeriesDataSet(
    data=data,                # pandas DataFrame holding the series
    time_idx="time_idx",      # integer time index column (placeholder name)
    target="value",           # column to predict (placeholder name)
    group_ids=["series"],     # column(s) identifying each series (placeholder name)
    max_encoder_length=5,
    max_prediction_length=1,
)
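For reference, this is roughly how I then train it: I build a dataloader from the dataset and construct the model with TemporalFusionTransformer.from_dataset (the hyperparameter values here are just illustrative):

from pytorch_forecasting import TemporalFusionTransformer

# dataloader that the trainer iterates over during fit()
train_dataloader = test.to_dataloader(train=True, batch_size=64)

# build the model directly from the dataset so its input sizes match
tft = TemporalFusionTransformer.from_dataset(
    test,
    learning_rate=0.03,
    hidden_size=16,
)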
Now, when we train the Temporal Fusion Transformer on this TimeSeriesDataSet, I want to understand exactly how it handles the series.
- Is it using a sliding window approach of some sort here, i.e. [0, 1, 2, 3, 4] -> 5, [1, 2, 3, 4, 5] -> 6, or does it use non-overlapping chunks, i.e. [0, 1, 2, 3, 4] -> 5, [5, 6, 7, 8, 9] -> 10?
- What happens to series that aren't of the correct length? Say we chop a series into windows of 5 steps: what if we get to the end and only 2 steps are left in the last window, for example? (The sketch after this list is what I would use to check both cases.)
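To make the question concrete, here is a minimal sketch of how I would inspect the windows the dataset actually produces, assuming a single toy series (the column names series, time_idx and value are placeholders):

import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

# one toy series with 12 consecutive time steps, values 0..11
data = pd.DataFrame({
    "series": "a",
    "time_idx": np.arange(12),
    "value": np.arange(12, dtype=float),
})

test = TimeSeriesDataSet(
    data=data,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=5,
    max_prediction_length=1,
)

# print the raw encoder window and the prediction target for every sample
for x, y in test.to_dataloader(train=False, batch_size=1):
    print(x["encoder_target"].squeeze().tolist(), "->", x["decoder_target"].squeeze().tolist())

Printing the encoder_target / decoder_target pairs like this should show whether consecutive samples overlap and what happens to the leftover steps at the end of the series.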