I'm implementing a Restricted Boltzmann Machine with Rectified Linear Units. I haven't found a simple implementation anywhere, so I wanted to ask if somebody would kindly verify the design.
Here is the CD-1 calculation:
def propup(self, vis):
    activation = numpy.dot(vis, self.W) + self.hbias
    # ReLU activation of the hidden units
    return activation * (activation > 0)
def sample_h_given_v(self, v0_sample):
    # Pre-rectification activation: the noisy ReLU of Nair & Hinton (2010)
    # samples max(0, x + N(0, sigmoid(x))), with x taken *before* the ReLU
    x = numpy.dot(v0_sample, self.W) + self.hbias
    h1_mean = x * (x > 0)
    # numpy's normal() expects the standard deviation, not the variance,
    # so the sigmoid is square-rooted
    h1_sample = numpy.maximum(0, x + self.numpy_rng.normal(0, numpy.sqrt(sigmoid(x))))
    return [h1_mean, h1_sample]
def propdown(self, hid):
    activation = numpy.dot(hid, self.W.T) + self.vbias
    # Binary visible units keep the usual logistic activation
    return sigmoid(activation)
def sample_v_given_h(self, h0_sample):
    v1_mean = self.propdown(h0_sample)
    # Bernoulli sample for the binary visible units
    v1_sample = self.numpy_rng.binomial(size=v1_mean.shape, n=1, p=v1_mean)
    return [v1_mean, v1_sample]
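For completeness, sigmoid here is just the logistic function:

def sigmoid(x):
    # Standard logistic function, also used as the NReLU noise variance
    return 1.0 / (1.0 + numpy.exp(-x))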
This is how I calculate the gradient:
def get_cost_updates(self, lr, decay, mom, l1_penalty, p_noise, epoch, persistent=None, k=1):
    # Positive phase: hidden statistics driven by the training data
    ph_mean, ph_sample = self.sample_h_given_v(self.input)
    # Negative phase: one step of alternating Gibbs sampling (CD-1)
    nv_means, nv_samples, nh_means, nh_samples = self.gibbs_hvh(ph_sample)
    # Divide by the batch size so the scale matches the mean-based bias gradients
    batch_size = self.input.shape[0]
    W_grad = (numpy.dot(self.input.T, ph_mean) - numpy.dot(nv_samples.T, nh_means)) / batch_size
    vbias_grad = numpy.mean(self.input - nv_samples, axis=0)
    hbias_grad = numpy.mean(ph_mean - nh_means, axis=0)
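(gibbs_hvh is just one hidden-visible-hidden Gibbs step built from the two sampling functions above:)

def gibbs_hvh(self, h0_sample):
    # One hidden -> visible -> hidden step of alternating Gibbs sampling
    v1_mean, v1_sample = self.sample_v_given_h(h0_sample)
    h1_mean, h1_sample = self.sample_h_given_v(v1_sample)
    return [v1_mean, v1_sample, h1_mean, h1_sample]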
My question is: how do I stack these into a DBN?
The aim is to build an autoencoder, but I'm not sure how to handle the visible units of the second layer also being real-valued.
I can see that this question was asked some time ago, but as there is no answer yet, I will add mine. A DBN such as you describe is trained with a greedy layer-wise algorithm that treats each layer as if it were a standalone RBM. I recently gave a lecture about this, and you can find the presentation, with a numeric example, here: https://www.slideshare.net/mobile/AvnerGidron/generative-models
I think that once you understand the presentation, it shouldn't take you long to implement it yourself.
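In short: train the first RBM on the raw data, freeze it, use its hidden activations as the training data for the second RBM, and repeat. Because your ReLU hidden activations are real-valued, the higher-layer RBMs should use Gaussian (linear) visible units rather than the binary sigmoid/Bernoulli ones. Here is a rough sketch, assuming an RBM class like yours plus a hypothetical train_step method that performs one CD-1 update; the sizes and epoch count are just examples:

# Greedy layer-wise pre-training (sketch). RBM is assumed to be a class
# like the one in the question; train_step is a hypothetical method that
# performs one CD-1 parameter update on a batch.
layer_sizes = [784, 500, 250, 30]   # example layer sizes for an autoencoder
n_epochs = 10                       # example value
rbms = []
layer_input = data                  # data: (n_samples, 784) numpy array
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_visible=n_vis, n_hidden=n_hid)
    for epoch in range(n_epochs):
        rbm.train_step(layer_input)
    rbms.append(rbm)
    # The frozen layer's hidden activations become the next layer's "data"
    layer_input = rbm.propup(layer_input)

For the autoencoder you then unroll the stack: the propup passes form the encoder, a mirrored sequence of propdown passes forms the decoder, and the whole network is fine-tuned with backpropagation (as in Hinton & Salakhutdinov, 2006).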