I'm normalizing and rescaling my training set with:
# standardize: zero mean, unit variance
feat = (feat - feat.mean()) / feat.std()
# rescale to the range [-1, 1]
feat = ((feat - feat.min()) / (feat.max() - feat.min())) * 2 - 1
This works well. I transform the test set in exactly the same way, using the mean, standard deviation, min, and max from the training set. That is fine as long as the test set's range matches the training set's. However, if the range of the untransformed feature in the test set is different, some values will fall outside [-1, 1] after rescaling. How can this be addressed?
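A minimal sketch of the problem, using made-up 1-D arrays (the names and values are illustrative, not from the original post): when the test set contains a value beyond the training range, applying training-set statistics pushes the rescaled value past 1.

```python
import numpy as np

# Hypothetical splits; 12.0 in the test set exceeds the training max of 10.0
train = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
test = np.array([1.0, 5.0, 12.0])

# Standardize with training statistics only
mu, sigma = train.mean(), train.std()
train_z = (train - mu) / sigma
test_z = (test - mu) / sigma

# Min-max rescale to [-1, 1] with the training min/max
lo, hi = train_z.min(), train_z.max()
train_s = (train_z - lo) / (hi - lo) * 2 - 1
test_s = (test_z - lo) / (hi - lo) * 2 - 1

print(train_s.min(), train_s.max())  # -1.0 1.0 by construction
print(test_s.max())                  # greater than 1.0, since 12.0 > 10.0
```

The training set lands exactly in [-1, 1] by construction, but the out-of-range test value does not.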
I think the only way is to normalize your data with the min and max of all the data (training and test sets together).
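A sketch of this suggestion with hypothetical arrays (note the caveat that computing statistics over the combined data lets information from the test set leak into preprocessing, which is often discouraged):

```python
import numpy as np

# Illustrative splits; in practice these would be your actual data
train = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
test = np.array([1.0, 5.0, 12.0])

# Fit min/max on the combined data
combined = np.concatenate([train, test])
lo, hi = combined.min(), combined.max()

# Rescale both splits with the shared min and max
train_s = (train - lo) / (hi - lo) * 2 - 1
test_s = (test - lo) / (hi - lo) * 2 - 1
# Every value in both splits now lands in [-1, 1]
```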