Kinect get depth pixels from arbitrary angle?

Is it possible to somehow get the depth pixels from a Kinect as if viewed from a different angle? Say the Kinect is recording me from above, and I would like to fetch the depth pixels as if it were seeing me from the front.

I have seen examples of people using point clouds and rotating the mesh created from those points. Even though the Kinect is, say, recording the person from above, with these point clouds one can still rotate the mesh as if it were seen from the front, or even from beneath the person's feet (which is really cool!).

So can I perhaps create my own depth pixels from a point cloud this way? Any pointers would be greatly appreciated.

1 Answer

Okay, I kind of solved this in a very ugly way, or at least that is what I think. If anyone has a better idea, please post it!

So what I did was create a mesh from the point cloud (as seen in the ofxKinect example), draw that into an FBO, and wrap the draw call in a shader that writes each fragment's depth value as a color. This way I get a color range of [0-1].
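Roughly, the render step looks like this (a simplified sketch, not my exact code; depthFbo, depthShader, frontCam and pointMesh are just illustrative names, and the fragment shader is assumed to write the fragment's normalized depth into the red channel of the output color):

// Rough openFrameworks sketch of the render-to-FBO step (simplified).
// The fragment shader is assumed to output the normalized depth as a color.
ofFbo depthFbo;
ofShader depthShader;
ofEasyCam frontCam;   // virtual camera placed where you want the "new" viewpoint
ofMesh pointMesh;     // built from the Kinect point cloud (as in the ofxKinect example)

void setup() {
    depthFbo.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);
    depthShader.load("depth");   // loads depth.vert / depth.frag
}

void renderDepthFromFront() {
    depthFbo.begin();
    ofClear(0, 0, 0, 0);         // alpha stays 0 wherever no geometry is drawn
    frontCam.begin();
    depthShader.begin();
    pointMesh.draw();            // shader colors each fragment by its depth in [0-1]
    depthShader.end();
    frontCam.end();
    depthFbo.end();
}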

After that, back on the CPU, I fetched the pixels from the FBO using readToPixels and wrote them into an ofImage. From the ofImage I could then sample the color of each pixel (now a grayscale depth value).
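The read-back itself is only a couple of calls (again a simplified sketch; fboPixels is just an illustrative name, sampleImg is the image I sample from below):

// Copy the FBO color attachment back to the CPU and wrap it in an ofImage
// so individual pixels can be sampled with getColor(x, y).
ofPixels fboPixels;
ofImage sampleImg;
depthFbo.readToPixels(fboPixels);
sampleImg.setFromPixels(fboPixels);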

Sigh. Now, looping over each pixel (x, y), I check the color of that pixel, grab its value, and do some calculations to see where that color value lies in a 0-255 range (like the regular kinect.getDepthPixels(...) data):

// Allocate one byte per screen pixel for the remapped depth values.
int size = sizeof(unsigned char) * ( ofGetHeight() * ofGetWidth() );
unsigned char* p = (unsigned char*)malloc(size);
for (int x = 0; x < ofGetWidth(); x++)
{
    for (int y = 0; y < ofGetHeight(); y++)
    {
        ofColor col = sampleImg.getColor(x, y);
        float d = 0.0f;

        // Alpha is 0 where the FBO was cleared, i.e. no geometry was drawn there.
        if (col.a != 0)
        {
            // Map the shader's color back to a normalized depth, then scale to [0-255].
            d = (float)(col.r * 3) / 765.0f;
            d = d * 255.0f;
        }

        // Row-major index into the depth buffer.
        int id = (y * ofGetWidth()) + x;
        p[id] = (unsigned char)d;
    }
}

From p I get an unsigned char array with values in the [0-255] range, just like kinect.getDepthPixels(), but instead of being based on the raw depth texture it now contains the depth data of the point cloud as seen from the new viewpoint.
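If you want to sanity-check the buffer, one way (untested, just a suggestion) is to wrap p in a grayscale ofImage and draw it, the same way the raw depth pixels are usually previewed:

// Preview the remapped depth buffer as a grayscale image.
ofImage depthPreview;
depthPreview.setFromPixels(p, ofGetWidth(), ofGetHeight(), OF_IMAGE_GRAYSCALE);
depthPreview.draw(0, 0);
free(p);   // p was malloc'd above, so release it once you are done with it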

This is not fully tested, but I think it is a step in the right direction. I am not too fond of this solution, but I hope it helps someone else, as I have been googling like crazy all day with not much help. I might have just overcomplicated things for myself, but we'll see.