Recently, I was wondering why nobody seems to use denoising training for Restricted Boltzmann Machine (RBM) and Convolutional RBM (CRBM) models.
Denoising is very effective for auto-encoders (Denoising Auto-Encoders (DAE) and Stacked DAEs (SDAE)).
I tried to apply denoising in my code by simply corrupting the inputs at each epoch, the same way it is done in my auto-encoder. For the auto-encoder it works quite well, but for the RBM it fails badly: learning becomes more unstable and the learned features are worse. Is that simply because the RBM is already stochastic and therefore already handles noise on its own?
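To make the question concrete, here is a minimal NumPy sketch of what I mean by "corrupting the inputs at each epoch": masking noise (as in a DAE) is applied to the visible vector before a standard CD-1 update. The function names and hyper-parameters are made up for illustration; this is not my exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(v, p=0.3, rng=rng):
    """Masking noise: zero out each visible unit with probability p,
    as commonly done for denoising auto-encoders."""
    mask = rng.random(v.shape) >= p
    return v * mask

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1, noise=0.3, rng=rng):
    """One CD-1 update where the visible input is corrupted first
    (the 'denoising RBM' variant described in the question).
    W: (n_visible, n_hidden), b: visible bias, c: hidden bias."""
    v_tilde = corrupt(v0, noise, rng)            # corrupted input
    h0 = sigmoid(v_tilde @ W + c)                # hidden probabilities
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + b)             # reconstruction
    h1 = sigmoid(v1 @ W + c)
    # contrastive divergence gradients (positive minus negative phase)
    W += lr * (v_tilde.T @ h0 - v1.T @ h1) / v0.shape[0]
    b += lr * (v_tilde - v1).mean(axis=0)
    c += lr * (h0 - h1).mean(axis=0)
    return W, b, c
```

In the plain RBM the `corrupt` call would simply be removed, so the only difference between the two training runs is the masking noise on the positive phase.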
Is there a reason why people don't use denoising training with RBMs and CRBMs?
Thanks