I've trained an autoencoder using lasagne/nolearn. Suppose the network layers are [500, 100, 100, 500]. I trained the net like so:
net.fit(X, X)
I want to do something like the following:
net.predict(X, layer=2)
so that I get the compressed (bottleneck) representation of my data. For example, if my initial data has shape [10000, 500], the resulting data should have shape [10000, 100].
I searched but could not find how to do that. Is it possible with lasagne/nolearn?
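For reference, here is a minimal sketch of the kind of setup I mean (the layer names and hyperparameters below are just placeholders, not my exact configuration):

```python
from lasagne import layers
from lasagne.updates import nesterov_momentum
from nolearn.lasagne import NeuralNet

# Sketch of an autoencoder with layer sizes [500, 100, 100, 500].
net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('encode', layers.DenseLayer),      # 500 -> 100
        ('bottleneck', layers.DenseLayer),  # 100 -> 100
        ('output', layers.DenseLayer),      # 100 -> 500
    ],
    input_shape=(None, 500),
    encode_num_units=100,
    bottleneck_num_units=100,
    output_num_units=500,
    output_nonlinearity=None,
    regression=True,                        # reconstruct the input
    update=nesterov_momentum,
    update_learning_rate=0.01,
    max_epochs=10,
)
```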
Looks like the answer is here in the documentation: http://lasagne.readthedocs.org/en/latest/user/layers.html#propagating-data-through-layers
Here are the relevant parts:
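In short (a sketch of the pattern the docs describe, not a verbatim quote): you build a symbolic expression for the output of whichever layer you want with lasagne.layers.get_output and compile it with Theano. Here l_in and l_hidden stand for the input layer and the layer whose output you want:

```python
import theano
import lasagne

# l_in: the network's input layer; l_hidden: the layer whose output we want.
# deterministic=True disables dropout/noise layers at prediction time.
hidden_expr = lasagne.layers.get_output(l_hidden, deterministic=True)
get_hidden = theano.function([l_in.input_var], hidden_expr)
```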
Assuming net is of type nolearn.lasagne.NeuralNet, it looks like you can get access to the underlying layer objects with net.get_all_layers(). I don't see it in the documentation, but it's here on line 592.
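Putting the two together, here is a sketch of pulling out the 100-unit representation from a trained net. The layer index and variable names are assumptions about your particular network, so adjust them to match your architecture:

```python
import theano
import lasagne

# Assumption: get_all_layers() returns layers in order
# [input, encode, bottleneck, output], so index 2 is a 100-unit layer.
all_layers = net.get_all_layers()
input_layer = all_layers[0]
hidden_layer = all_layers[2]

# Symbolic output of the hidden layer; deterministic=True turns off
# dropout/noise layers, if any, at prediction time.
hidden_expr = lasagne.layers.get_output(hidden_layer, deterministic=True)
encode = theano.function([input_layer.input_var], hidden_expr)

X_encoded = encode(X)  # shape (10000, 100) for the example in the question
```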