LSTM and Dense layers preprocessing


I am trying to build a NN with LSTM and Dense layers.

My net is:

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .weightInit(WeightInit.XAVIER)
        .updater(new Adam(0.1))
        .list()
        .layer(0, new LSTM.Builder().activation(Activation.TANH).nIn(numInputs).nOut(120).build())
        .layer(1, new DenseLayer.Builder().activation(Activation.RELU).nIn(120).nOut(1000).build())
        .layer(2, new DenseLayer.Builder().activation(Activation.RELU).nIn(1000).nOut(20).build())
        .layer(3, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD).activation(Activation.SOFTMAX).nIn(20).nOut(numOutputs).build())
        .inputPreProcessor(1, new RnnToFeedForwardPreProcessor())
        .build();

I read my data like this:

SequenceRecordReader reader = new CSVSequenceRecordReader(0, ",");
reader.initialize(new NumberedFileInputSplit("TRAIN_%d.csv", 1, 17476));
DataSetIterator trainIter = new SequenceRecordReaderDataSetIterator(reader, miniBatchSize, 6, 7, false);
allData = trainIter.next();

// Load the test/evaluation data:
SequenceRecordReader testReader = new CSVSequenceRecordReader(0, ",");
testReader.initialize(new NumberedFileInputSplit("TEST_%d.csv", 1, 8498));
DataSetIterator testIter = new SequenceRecordReaderDataSetIterator(testReader, miniBatchSize, 6, 7, false);
allData = testIter.next();
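
As a quick sanity check (a sketch, reusing the `trainIter` defined above), the shapes the iterator actually produces can be printed before training:

import java.util.Arrays;
import org.nd4j.linalg.dataset.DataSet;

DataSet sample = trainIter.next();
// For sequence data, features are [minibatch, features, timesteps]
System.out.println(Arrays.toString(sample.getFeatures().shape()));
// One-hot sequence labels are [minibatch, numClasses, timesteps]
System.out.println(Arrays.toString(sample.getLabels().shape()));
trainIter.reset(); // rewind so training sees this batch again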

So when the data goes into the net it has shape [batch, features, timesteps] = [32, 7, 60]. I can verify this with a deliberately triggered error:

Received input with size(1) = 7 (input array shape = [32, 7, 60]); input.size(1) must match layer nIn size (nIn = 9)

So the input reaches the net normally. After the first LSTM layer it should be reshaped to 2-D and then pass through the Dense layers.

But then I get the next problem:

Labels and preOutput must have equal shapes: got shapes [32, 6, 60] vs [1920, 6]

It did not reshape before going into the Dense layer, and one feature seems to be missing (the shape is now [32, 6, 60] instead of [32, 7, 60]). Why?


1 Answer


If possible, you'll want to use setInputType, which will set up the preprocessors for you.

Here's an example configuration of LSTM to Dense:

// wsm is a WorkspaceMode; rnnDataFormat is an RNNFormat (NCW or NWC), explained below
MultiLayerConfiguration conf1 = new NeuralNetConfiguration.Builder()
        .trainingWorkspaceMode(wsm)
        .inferenceWorkspaceMode(wsm)
        .seed(12345)
        .updater(new Adam(0.1))
        .list()
        .layer(new LSTM.Builder().nIn(3).nOut(3).dataFormat(rnnDataFormat).build())
        .layer(new DenseLayer.Builder().nIn(3).nOut(3).activation(Activation.TANH).build())
        .layer(new RnnOutputLayer.Builder().nIn(3).nOut(3).activation(Activation.SOFTMAX).dataFormat(rnnDataFormat)
                .lossFunction(LossFunctions.LossFunction.MCXENT).build())
        // setInputType inserts the RnnToFeedForward/FeedForwardToRnn preprocessors automatically
        .setInputType(InputType.recurrent(3, rnnDataFormat))
        .build();
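
Applied to the network in the question, that would look roughly like this (a sketch, not tested; it keeps the original layer sizes, assumes the input layout is [minibatch, features, timesteps] = NCW, and swaps OutputLayer for RnnOutputLayer since the labels are 3-D):

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .weightInit(WeightInit.XAVIER)
        .updater(new Adam(0.1))
        .list()
        .layer(new LSTM.Builder().activation(Activation.TANH).nOut(120).build())
        .layer(new DenseLayer.Builder().activation(Activation.RELU).nOut(1000).build())
        .layer(new DenseLayer.Builder().activation(Activation.RELU).nOut(20).build())
        .layer(new RnnOutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                .activation(Activation.SOFTMAX).nOut(numOutputs).build())
        // nIn values and the RnnToFeedForward/FeedForwardToRnn preprocessors
        // are inferred from the input type, so no manual .inputPreProcessor(...)
        .setInputType(InputType.recurrent(numInputs, RNNFormat.NCW))
        .build();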

RNNFormat is:

import org.deeplearning4j.nn.conf.RNNFormat;

This is an enum that specifies what your data format should be (channels first or channels last). From the javadoc:

/**
 * NCW = "channels first" - arrays of shape [minibatch, channels, width]<br>
 * NWC = "channels last" - arrays of shape [minibatch, width, channels]<br>
 * "width" corresponds to sequence length and "channels" corresponds to sequence item size.
 */

Source here: https://github.com/eclipse/deeplearning4j/blob/1930d9990810db6214829c716c2ae7eb7f59cd13/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/RNNFormat.java#L21

More here in our tests: https://github.com/eclipse/deeplearning4j/blob/1930d9990810db6214829c716c2ae7eb7f59cd13/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/nn/layers/recurrent/TestTimeDistributed.java#L58
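
For the data in the question, [32, 7, 60] is [minibatch, channels, width], i.e. channels first. A minimal sketch of the two options (the shape comments refer to the questioner's batch of 32):

import org.deeplearning4j.nn.conf.RNNFormat;
import org.deeplearning4j.nn.conf.inputs.InputType;

// [32, 7, 60] = [minibatch, channels, width] -> channels first
InputType channelsFirst = InputType.recurrent(7, RNNFormat.NCW);
// the same data laid out as [32, 60, 7] would be channels last
InputType channelsLast = InputType.recurrent(7, RNNFormat.NWC);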