Gibbs sampling gives small probabilities


As part of our final design project, we have to design a Gibbs sampler to denoise an image. We have chosen to use the Metropolis algorithm instead of a regular Gibbs sampler. A rough sketch of the algorithm is below; all pixels are 0-255 greyscale values, and we are using a simple smoothness prior distribution.

main()
    get input image as img
    originalImg = img
    for k = 1 to 1000
        beta = 3 / log(1 + k)
        initialEnergy = energy(img, originalImg)
        for i = 0 to imageRows - 1
            for j = 0 to imageCols - 1
                img[i][j] = metropolisSample(beta, originalImg, img, initialEnergy, i, j)
                // initialEnergy is updated inside metropolisSample (passed by reference)


energy(img, originalImg)
    ans = 0
    for i = 0 to imageRows - 1
        for j = 0 to imageCols - 1
            ans += (img[i][j] - originalImg[i][j])^2 / (255*255)
            // smoothness terms; skip any neighbour that falls outside the image
            ans += (img[i][j] - img[i][j+1])^2 / (255*255)
            ans += (img[i][j] - img[i][j-1])^2 / (255*255)
            ans += (img[i][j] - img[i+1][j])^2 / (255*255)
            ans += (img[i][j] - img[i-1][j])^2 / (255*255)
    return ans
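A runnable version of this energy function might look like the following (a sketch, assuming `img` and `originalImg` are equally sized 2-D lists of ints; out-of-range neighbours are skipped rather than wrapped):

```python
def energy(img, original_img):
    """Data term plus 4-neighbour smoothness term, both normalised by 255^2."""
    rows, cols = len(img), len(img[0])
    ans = 0.0
    for i in range(rows):
        for j in range(cols):
            # data term: stay close to the observed (noisy) image
            ans += (img[i][j] - original_img[i][j]) ** 2 / (255 * 255)
            # smoothness term: penalise differences with in-bounds neighbours
            for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    ans += (img[i][j] - img[ni][nj]) ** 2 / (255 * 255)
    return ans
```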


metropolisSample(beta, originalImg, img, initialEnergy, i, j)
    // initialEnergy is passed by reference so accepted moves persist
    imageCopy = copy of img
    imageCopy[i][j] = random int between 0 and 255
    newEnergy = energy(imageCopy, originalImg)
    if (newEnergy < initialEnergy)
        initialEnergy = newEnergy
        return imageCopy[i][j]
    else
        rand = random float between 0 and 1
        prob = exp(-(1/beta) * (newEnergy - initialEnergy))
        if rand < prob
            initialEnergy = newEnergy
            return imageCopy[i][j]
        else
            return img[i][j]
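In Python, one step of this Metropolis update might be sketched as follows (an illustration, not the project code; the proposal is made in place and the updated energy is returned alongside the pixel so the caller can keep the two in sync):

```python
import math
import random

def energy(img, original_img):
    """Same energy as in the pseudocode: data term + 4-neighbour smoothness."""
    rows, cols = len(img), len(img[0])
    total = 0.0
    for i in range(rows):
        for j in range(cols):
            total += (img[i][j] - original_img[i][j]) ** 2 / 65025.0
            for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    total += (img[i][j] - img[ni][nj]) ** 2 / 65025.0
    return total

def metropolis_sample(beta, original_img, img, current_energy, i, j):
    """Propose a uniform random value for pixel (i, j); accept or reject."""
    old_value = img[i][j]
    img[i][j] = random.randint(0, 255)          # propose in place
    new_energy = energy(img, original_img)
    delta = new_energy - current_energy
    # accept if energy decreases, else with probability exp(-delta / beta)
    if delta <= 0 or random.random() < math.exp(-delta / beta):
        return img[i][j], new_energy
    img[i][j] = old_value                       # reject: restore the old pixel
    return old_value, current_energy
```

Returning the energy (rather than mutating a by-reference parameter) keeps the accepted-move bookkeeping explicit.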

That's pretty much the gist of the program. My issue is with the step where I calculate the acceptance probability:

prob = exp(-(1/beta) * (newEnergy - initialEnergy))

The difference in energies is so large that the probability is almost always zero. What is the proper way to mitigate this? We have also tried the Gibbs sampling approach, but we run into a similar problem. The Gibbs sampler code is as follows; instead of metropolisSample, we call gibbsSample:

gibbsSample(beta,originalImg,img,initialEnergy,i,j)
    imageCopy = copy of img
    sum = 0
    for k = 0 to 255
        imageCopy[i][j] = k
        energies[k] = energy(imageCopy,originalImg)
        prob[k] = exp(-(1/beta) * energies[k])
        sum += prob[k]

    for k = 0 to 255
        prob[k] = prob[k] / sum   // normalise

    for k = 1 to 255
        prob[k] = prob[k-1] + prob[k] //Convert our PDF to a CDF

    rand = random float between 0 and 1
    k = 0
    while (1)
        if (rand < prob[k])
            break
        k++
    initialEnergy = energies[k]
    return k
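The final inverse-CDF loop can be written compactly in Python; a sketch assuming `probs` is a normalised probability vector (the function name is mine, not the project code):

```python
import random
from itertools import accumulate

def sample_discrete(probs):
    """Draw an index k with probability probs[k] (inverse-CDF sampling)."""
    cdf = list(accumulate(probs))       # running sums: the CDF
    r = random.random() * cdf[-1]       # scaling guards against rounding in the sum
    for k, c in enumerate(cdf):
        if r < c:
            return k
    return len(probs) - 1               # numerical safety net
```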

We ran into similar issues with this implementation as well. When we calculated

prob[k] = exp(-(1/beta) * energies[k])

our energies were so large that our probabilities all went to zero. Theoretically, this shouldn't be an issue because we are summing them all up and then dividing by the sum, but the floating point representation just isn't accurate enough. What would be a good way to fix this?
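A standard numerical fix for exactly this situation is the max-shift (log-sum-exp) trick: subtract the smallest energy from all of them before exponentiating. The normalisation cancels the shift, so the result is mathematically identical, but the largest weight is now exp(0) = 1 and nothing underflows. A sketch, assuming the conditional is proportional to exp(-energies[k] / beta):

```python
import math

def normalised_boltzmann(energies, beta):
    """Return probabilities proportional to exp(-E / beta), computed stably."""
    e_min = min(energies)                # shift so the largest exponent is 0
    weights = [math.exp(-(e - e_min) / beta) for e in energies]
    total = sum(weights)                 # at least 1.0, so no division by zero
    return [w / total for w in weights]
```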

There are 2 answers below.

Answer 1:

I think the probability for Gibbs sampling in the Ising model should be

p = 1 / (1 + np.exp(-2 * beta * Energy(x,y)))
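For a two-state (±1 spin) Ising conditional with energy E(s) = -s * h for local field h, the normalisation indeed collapses to this logistic form, which cannot produce an all-zero distribution. A small sketch using `math.exp` (equivalent to the `np.exp` version above):

```python
import math

def spin_up_probability(beta, local_field):
    """P(s = +1 | neighbours) for an Ising spin with energy E(s) = -s * local_field."""
    # exp(beta*h) / (exp(beta*h) + exp(-beta*h)) simplifies to the logistic form
    return 1.0 / (1.0 + math.exp(-2.0 * beta * local_field))
```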
Answer 2:

I know nothing about your specific problem, but my first response would be to scale the energies. Your pixels are in the range of 0..255, which is arbitrary. If the pixels were fractions between zero and one, you would have very different results.

If the energy units are in pixel^2, try dividing the energies by 256^2. Else, try dividing by 256.
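A quick sanity check of how much the scale matters (illustrative numbers only, not taken from the question):

```python
import math

delta = 5000.0                       # hypothetical raw squared-pixel energy gap
naive = math.exp(-delta)             # underflows to exactly 0.0
scaled = math.exp(-delta / 256**2)   # a usable acceptance probability near 0.93
```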

Also, given that the data is fully random, some proposals really do have very high energies, and for those the probabilities should in fact be near zero.

My lack of knowledge of your problem may have resulted in a useless answer. If so, please ignore it.