I want to create a depth histogram of an image to see how the distribution of the depth values varies. But I don’t know how to do it, because there are too many possible depth values and counting each one would produce a histogram with far too many bins, e.g. 307,200 bins for a 480*640 image.
In the following webpage:
They divided the depth values by 4 and then performed a bit-shift adjustment on the data to create a reasonable-looking display:
for (int i = 0; i < PImage.Bits.Length; i += 2)
{
    // Combine two bytes (little-endian) into a 13-bit depth value
    temp = (PImage.Bits[i + 1] << 8 | PImage.Bits[i]) & 0x1FFF;
    // Integer-divide the depth by 4 (>> 2) to pick the histogram bin
    count[temp >> 2]++;
    // Shift back up (<< 2) so the stored bytes are rescaled for display
    temp <<= 2;
    PImage.Bits[i] = (byte)(temp & 0xFF);
    PImage.Bits[i + 1] = (byte)(temp >> 8);
}
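To make the per-step behavior concrete, here is a sketch of the same loop rewritten in Python (an assumption for illustration; `bits` stands in for `PImage.Bits` as a plain list of bytes):

```python
def histogram_and_rescale(bits):
    """Sketch of the C# loop above: combine byte pairs into 13-bit depths,
    bin each depth at 1/4 resolution, and rescale the bytes for display."""
    count = [0] * (0x1FFF // 4 + 1)  # 2048 bins instead of 8192 depth values
    for i in range(0, len(bits), 2):
        # Combine two bytes (little-endian) into a 13-bit depth value
        depth = (bits[i + 1] << 8 | bits[i]) & 0x1FFF
        # Bin index = depth / 4 (integer division via right shift)
        count[depth >> 2] += 1
        # Shift back up and write the rescaled bytes for display
        scaled = (depth << 2) & 0xFFFF
        bits[i] = scaled & 0xFF
        bits[i + 1] = scaled >> 8
    return count
```

For example, a pixel whose two bytes encode depth 4 lands in bin 1 (4 >> 2), and its stored value becomes 16 (4 << 2).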
I understand the operations that they did, but I don’t understand how this method shrinks the number of bins to 1/4.
So, how can I show that information to create a reasonable looking display without using too many bins?
Any ideas?
Best regards,
This part explains it:
int[] count = new int[0x1FFF / 4 +1];
By dividing the depth values by 4 you reduce the number of bins, by lowering the resolution at which you measure different depths. This allows the count array to be four times smaller (2048 entries instead of 8192).
Based on your comment
I think you may be misunderstanding what the histogram is. The screen size has nothing to do with the number of bins: each bin counts how many pixels in the whole scene fall at a given depth level, so bins are not correlated to screen position at all.
Explanation of code: