I am using a depth camera that captures a 500x500 depth map in meters, and I have the corresponding RGB image. The function open3d.geometry.RGBDImage.create_from_color_and_depth(RGB_image, depth_image) requires the depth image in PNG form. When I convert the depth map array to a PNG image, I get an image that is very dark because all the depths are in the range of 3 m to 6 m.
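For reference, this is roughly how I load the images and build the RGBDImage (a minimal sketch, not my exact code; the file names are placeholders):

    import open3d as o3d

    # Load the color and depth PNGs as Open3D images (placeholder file names)
    color_raw = o3d.io.read_image("color.png")
    depth_raw = o3d.io.read_image("depth.png")

    # Combine them into an RGBDImage with the function mentioned above
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color_raw, depth_raw)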
When I finally create the point cloud using o3d.geometry.PointCloud.create_from_rgbd_image(), the point cloud is sliced. By sliced I mean that the points fall into a few discrete depth layers, each containing a portion of the image.
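The point cloud is then created along these lines (again only a sketch; the intrinsics below are placeholders, not my camera's actual parameters):

    # Placeholder pinhole intrinsics for a 500x500 image: width, height, fx, fy, cx, cy
    intrinsic = o3d.camera.PinholeCameraIntrinsic(500, 500, 500.0, 500.0, 250.0, 250.0)

    # Project the RGBD image into a point cloud; this is where I see the slicing
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
    o3d.visualization.draw_geometries([pcd])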
I think I am making a mistake when converting the depth map to a PNG image.
I also tried passing depth_scale=1 to o3d.geometry.PointCloud.create_from_rgbd_image(), but that function does not seem to accept a depth_scale argument, even though the documentation says depth values are scaled by 1/depth_scale.
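As far as I can tell from the documentation, depth_scale is actually a parameter of RGBDImage.create_from_color_and_depth rather than of the point-cloud function, with roughly this signature (the defaults shown are what the docs list, assuming I am reading the right version):

    # Documented defaults, as far as I can tell: depth_scale=1000.0, depth_trunc=3.0
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color_raw,
        depth_raw,
        depth_scale=1000.0,             # raw depth values are divided by this factor
        depth_trunc=3.0,                # depths beyond this (after scaling) are dropped
        convert_rgb_to_intensity=True)  # color image converted to a float intensity image

Is this the function where depth_scale=1 should go instead?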
How can I get a correct point cloud with this approach?
This is the depth file I am using - https://drive.google.com/file/d/1DtCj9eq0MyGhtndV4A7zvMAizSrqHIjP/view?usp=drive_link
To convert the depth array into a PNG image I am using the PIL library; the relevant part of my code is:
    if image.mode != 'RGB':
        image = image.convert('RGB')  # cast to 8-bit RGB so the result can be saved as a PNG
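For completeness, a self-contained version of the conversion step (a sketch: random values stand in for my real 500x500 depth map in meters, and the file name is a placeholder):

    import numpy as np
    from PIL import Image

    # Stand-in for my real data: 500x500 float32 depths between 3 m and 6 m
    depth_array = np.random.uniform(3.0, 6.0, (500, 500)).astype(np.float32)

    image = Image.fromarray(depth_array)  # 32-bit floating point image, mode 'F'
    if image.mode != 'RGB':
        image = image.convert('RGB')      # cast down to 8-bit RGB
    image.save("depth.png")               # this PNG is what I feed to Open3D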