How to write a multidimensional regression predictor using an RNN in TensorFlow 0.11


This is a toy version of what I'm actually trying to do. I have very high-dimensional input data (2e05 to 5e06 dimensions) over a large number of time steps (150,000 steps). I understand I may need some embedding/compression of the states in the end (see this question), but let's set that aside for now.

Take this toy input data with 11 dimensions, for example:

t  Pattern
0  0,0,0,0,0,0,0,0,0,2,1 
1  0,0,0,0,0,0,0,0,2,1,0 
2  0,0,0,0,0,0,0,2,1,0,0 
n  ...

I want the RNN to learn to associate the current time step with the next one, so that if the input (x) is t0, the desired output (y) is t1.

The idea behind using an RNN is that I can then feed the network one time step at a time (given the large dimensionality of my real data). Since the input and output have the same dimensionality, I'm not sure a basic RNN is appropriate. I looked briefly at the seq2seq tutorial, but I'm not sure an encoder/decoder is needed for this application, and I could not get anywhere with it on my toy data.

The following is all I've been able to come up with, but it does not converge at all. What am I missing?

import numpy as np
import tensorflow as tf

# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
                 [0,0,0,0,0,0,0,0,2,1,0],
                 [0,0,0,0,0,0,0,2,1,0,0],
                 [0,0,0,0,0,0,2,1,0,0,0],
                 [0,0,0,0,0,2,1,0,0,0,0],
                 [0,0,0,0,2,1,0,0,0,0,0],
                 [0,0,0,2,1,0,0,0,0,0,0],
                 [0,0,2,1,0,0,0,0,0,0,0],
                 [0,2,1,0,0,0,0,0,0,0,0],
                 [2,1,0,0,0,0,0,0,0,0,0]]

data = np.array(wholeSequence[:-1], dtype=int) # all but last
target = np.array(wholeSequence[1:], dtype=int) # all but first
trainingSet = tf.contrib.learn.datasets.base.Dataset(data=data, target=target)
trainingSetDims = trainingSet.data.shape[1]

EPOCHS = 10000
PRINT_STEP = 1000

x_ = tf.placeholder(tf.float32, [None, trainingSetDims])
y_ = tf.placeholder(tf.float32, [None, trainingSetDims])

cell = tf.nn.rnn_cell.BasicRNNCell(num_units=trainingSetDims)

outputs, states = tf.nn.rnn(cell, [x_], dtype=tf.float32)
outputs = outputs[-1]

W = tf.Variable(tf.random_normal([trainingSetDims, 1]))     
b = tf.Variable(tf.random_normal([trainingSetDims]))

y = tf.matmul(outputs, W) + b

cost = tf.reduce_mean(tf.square(y - y_))
train_op = tf.train.RMSPropOptimizer(0.005, 0.2).minimize(cost)

with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(EPOCHS):
        sess.run(train_op, feed_dict={x_:trainingSet.data, y_:trainingSet.target})
        if i % PRINT_STEP == 0:
            c = sess.run(cost, feed_dict={x_:trainingSet.data, y_:trainingSet.target})
            print('training cost:', c)

    response = sess.run(y, feed_dict={x_:trainingSet.data})
    print(response)

The approach comes from this thread.

In the end I would like to use an LSTM, and the point is to model the sequence so that an approximation of the whole thing can be reconstructed by seeding the network with t0 and then feeding each prediction back in as the next input.
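For concreteness, something like the following (a rough sketch, untested) is what I have in mind for that reconstruction step, run inside the session above. One caveat I can already see: since tf.nn.rnn is called on a single-step sequence, the hidden state is re-initialized to zero on every call, so no state actually carries over between fed-back steps.

# Sketch: seed with t0, then feed each prediction back as the next input
seed = trainingSet.data[0:1]                       # t0, shape (1, 11)
generated = [seed]
for _ in range(len(trainingSet.data) - 1):
    nextStep = sess.run(y, feed_dict={x_: generated[-1]})
    generated.append(nextStep)
print(np.vstack(generated))                        # approximate reconstruction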

EDIT1

I'm now seeing the cost decrease after adding the following code to rescale the histogram input data into a probability distribution before training:

# Convert hist to probability distribution
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
pdfSequence = wholeSequence*(1./np.sum(wholeSequence)) # Normalize to PD.

data = pdfSequence[:-1] # all but last
target = pdfSequence[1:] # all but first
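(Note that this divides by the sum of the whole array, so each row ends up summing to 0.1 rather than 1: each row contains 2 + 1 = 3, and the ten rows total 30. If a per-row normalization were wanted instead, it would look like this:)

# Alternative: normalize each time step (row) to sum to 1
rowSums = np.sum(wholeSequence, axis=1, keepdims=True)
pdfRows = wholeSequence / rowSums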

The output still does not look anything like the input, so I'm certainly missing something:

('training cost:', 0.49993864)
('training cost:', 0.0012213766)
('training cost:', 0.0010471855)
('training cost:', 0.00094231067)
('training cost:', 0.0008385859)
('training cost:', 0.00077578216)
('training cost:', 0.00071381911)
('training cost:', 0.00063783216)
('training cost:', 0.00061271922)
('training cost:', 0.00059178629)
[[ 0.02012676  0.02383044  0.02383044  0.02383044  0.02383044  0.02383044
   0.02383044  0.02383044  0.02383044  0.01642305  0.01271933]
 [ 0.02024871  0.02395239  0.02395239  0.02395239  0.02395239  0.02395239
   0.02395239  0.02395239  0.02395239  0.016545    0.01284128]
 [ 0.02013803  0.02384171  0.02384171  0.02384171  0.02384171  0.02384171
   0.02384171  0.02384171  0.02384171  0.01643431  0.0127306 ]
 [ 0.020188    0.02389169  0.02389169  0.02389169  0.02389169  0.02389169
   0.02389169  0.02389169  0.02389169  0.01648429  0.01278058]
 [ 0.02020025  0.02390394  0.02390394  0.02390394  0.02390394  0.02390394
   0.02390394  0.02390394  0.02390394  0.01649654  0.01279283]
 [ 0.02005926  0.02376294  0.02376294  0.02376294  0.02376294  0.02376294
   0.02376294  0.02376294  0.02376294  0.01635554  0.01265183]
 [ 0.02034193  0.02404562  0.02404562  0.02404562  0.02404562  0.02404562
   0.02404562  0.02404562  0.02404562  0.01663822  0.01293451]
 [ 0.02057907  0.02428275  0.02428275  0.02428275  0.02428275  0.02428275
   0.02428275  0.02428275  0.02428275  0.01687536  0.01317164]
 [ 0.02042386  0.02412754  0.02412754  0.02412754  0.02412754  0.02412754
   0.02412754  0.02412754  0.02412754  0.01672015  0.01301643]]
1 Answer

BEST ANSWER

I gave up on using TensorFlow directly and ended up using Keras. The following code learns the toy sequence above using a single-layer LSTM with a second dense layer:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM

# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
                 [0,0,0,0,0,0,0,0,2,1,0],
                 [0,0,0,0,0,0,0,2,1,0,0],
                 [0,0,0,0,0,0,2,1,0,0,0],
                 [0,0,0,0,0,2,1,0,0,0,0],
                 [0,0,0,0,2,1,0,0,0,0,0],
                 [0,0,0,2,1,0,0,0,0,0,0],
                 [0,0,2,1,0,0,0,0,0,0,0],
                 [0,2,1,0,0,0,0,0,0,0,0],
                 [2,1,0,0,0,0,0,0,0,0,0]]

# Preprocess data
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence[:-1] # all but last
target = wholeSequence[1:] # all but first

# Reshape training data for Keras LSTM model
# The training data needs shape (batchIndex, timeStepIndex, dimensionIndex):
# a single batch, 9 time steps, 11 dimensions
data = data.reshape((1, 9, 11))
target = target.reshape((1, 9, 11))

# Build Model
model = Sequential()  
model.add(LSTM(11, input_shape=(9, 11), unroll=True, return_sequences=True))
model.add(Dense(11))
model.compile(loss='mean_absolute_error', optimizer='adam')
model.fit(data, target, nb_epoch=2000, batch_size=1, verbose=2)
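To then close the loop as described in the question, a sketch like the following (my rough, untested addition) seeds the input window with t0 and feeds each prediction back in. Since the LSTM is causal, the output at step t depends only on inputs up to t, so the zeros to the right of the current position don't affect the fed-back values:

# Reconstruct the sequence by feeding predictions back in, starting from t0
window = np.zeros((1, 9, 11))
window[0, 0, :] = data[0, 0, :]          # seed with t0
for t in range(8):
    pred = model.predict(window)         # shape (1, 9, 11)
    window[0, t + 1, :] = pred[0, t, :]  # use output at t as input for t+1
print(window[0])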