I'm working on a prediction project with PyBrain, and I want to know how I can easily restrict the domain of the network's output so that the output layer only produces values in the range [0..1].
Currently, I get negative values in some activations of the net.
I have searched through the PyBrain documentation without coming across any methods or method parameters that indicate this is possible. Pertinent code sample below:
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import SigmoidLayer, LinearLayer
from pybrain.supervised.trainers import BackpropTrainer

self.pybrain_net_date = buildNetwork(self.pb_indim, hidden_dim, hidden_dim, 1, hiddenclass=SigmoidLayer, outclass=LinearLayer, bias=True)
self.pybrain_net_amount = buildNetwork(self.pb_indim, hidden_dim, hidden_dim, 1, hiddenclass=SigmoidLayer, outclass=LinearLayer, bias=True)
trainer_date = BackpropTrainer(self.pybrain_net_date, self.pbds_train_date)
trainer_amount = BackpropTrainer(self.pybrain_net_amount, self.pbds_train_amount)
print trainer_date.trainUntilConvergence(self.pbds_train_date, 30, verbose=True, validationProportion=0.20)
print trainer_amount.trainUntilConvergence(self.pbds_train_amount, 30, verbose=True, validationProportion=0.20)
for index, row in input_dataframe.iterrows():
    date = self.pybrain_net_date.activate(row)
    amount = self.pybrain_net_amount.activate(row)
    prediction_df['Expected Date'].ix[index] = date
    prediction_df['Expected Amount'].ix[index] = amount
It looks like you're training for 30 epochs, which isn't enough even for a small network. Depending on your network's size, it's possible that some nodes haven't been trained properly. Were you trying to set continueEpochs=30? That's the fourth argument to trainUntilConvergence, so the 30 you're passing positionally is being read as maxEpochs.
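For what it's worth, calling it with explicit keyword arguments avoids the positional mix-up. This is just a sketch assuming the usual signature (dataset, maxEpochs, verbose, continueEpochs, validationProportion); the epoch numbers here are placeholders:
trainer_date.trainUntilConvergence(
    dataset=self.pbds_train_date,
    maxEpochs=1000,             # hard cap on training epochs; your 30 currently lands here
    verbose=True,
    continueEpochs=30,          # keep training this many epochs past the last validation improvement
    validationProportion=0.20)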
It's also possible that you're not normalizing your input data properly. If the first suggestion doesn't fix the problem, can you please post an example of your training data?
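If normalization does turn out to be the issue, a minimal sketch of min-max scaling a pandas DataFrame into [0, 1] column by column could look like the following (minmax_scale is a made-up helper name, and input_dataframe is the frame from your snippet):
def minmax_scale(df):
    # rescale every column into [0, 1]; a constant column would need a guard against division by zero
    return (df - df.min()) / (df.max() - df.min())

scaled_inputs = minmax_scale(input_dataframe)
# build the SupervisedDataSet (and later call activate()) from scaled_inputs rather than the raw values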