How to project LiDAR points into a camera image according to the literature


For my project I was working on 3D LiDAR-to-camera projection. I was recommended MATLAB's LiDAR-Camera modules for calibration, and then to use the results for projection on a live stream of the data.

In MATLAB, I obtained the rotation matrix (R), the translation vector (T) and the camera intrinsic matrix (M). When using the MATLAB tools I get the following result: MATLAB-generated projection for the same values of R, T and M

But using the literature, where the projection matrix is given as

P = ( M | 0 ) · [[ R, T ], [ 0, 1 ]],

which maps a homogeneous point [x y z 1]^T to [u v w]^T (with pixel coordinates u/w, v/w), I am getting the following result:

Projection of M, R and T as per the literature

M, R, T as calculated from MATLAB:


M = array([[904.4679,   0.    , 596.9176],
       [  0.    , 814.7088, 349.8212],
       [  0.    ,   0.    ,   1.    ]])

R = array([[ 0.124 , -0.0038,  0.9923],
       [-0.9912,  0.0474,  0.124 ],
       [-0.0475, -0.9989,  0.0021]])

T = array([[-0.56  ,  0.241 , -0.4454]])
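Plugging these values into the literature formula, a minimal sketch of the projection looks like this (assuming the column-vector convention; the test point is arbitrary, chosen only to exercise the pipeline):

```python
import numpy as np

# Minimal sketch of the pinhole projection [u v w]^T = M [R|T] [x y z 1]^T,
# using the calibration values above.
M = np.array([[904.4679, 0.0, 596.9176],
              [0.0, 814.7088, 349.8212],
              [0.0, 0.0, 1.0]])
R = np.array([[ 0.1240, -0.0038, 0.9923],
              [-0.9912,  0.0474, 0.1240],
              [-0.0475, -0.9989, 0.0021]])
T = np.array([-0.5600, 0.2410, -0.4454])

P = M @ np.hstack([R, T.reshape(3, 1)])   # 3x4 projection matrix

point = np.array([5.0, 1.0, 0.5, 1.0])    # homogeneous LiDAR point (arbitrary)
u, v, w = P @ point
u, v = u / w, v / w                       # pixel coordinates; valid only if w > 0
```

Note that w here is the depth of the point along the camera's optical axis; points with w ≤ 0 are behind the camera and must be discarded before drawing.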

A code snippet looks like this:

import numpy as np
import pandas as pd
import cv2

rotation = np.array([[ 0.1240, -0.0038, 0.9923],
                     [-0.9912,  0.0474, 0.1240],
                     [-0.0475, -0.9989, 0.0021]])
translation = np.array([[-0.5600, 0.2410, -0.4454]])
# same translation with the x and z components swapped (an experiment)
translation_1 = np.array([[-0.4454, 0.2410, -0.5600]])
intrinsic = np.array([[904.4679, 0, 596.9176],
                      [0, 814.7088, 349.8212],
                      [0, 0, 1]])

a = np.concatenate((rotation, np.array([[0, 0, 0]])), axis=0)
b = np.concatenate((translation_1.T, np.array([[1]])), axis=0)
c = np.concatenate((a, b), axis=1)            # 4x4 extrinsic [[R T], [0 1]]
print('\n Extrinsic:\n\n', c)

d = np.concatenate((intrinsic, np.array([[0, 0, 0]]).T), axis=1)  # 3x4 (M | 0)
print('\n Intrinsic:\n\n', d)

e = np.matmul(d, c)                           # 3x4 projection matrix
print('\n Final:\n\n', e)

df = pd.read_csv('out_file.csv')
img = cv2.imread('images/0001.png')

for i in range(df.shape[0]):
    point = np.array([df['x'].iloc[i], df['y'].iloc[i], df['z'].iloc[i], 1.0])
    v = np.matmul(e, point)
    if v[2] <= 0:            # point is behind the camera
        continue
    v = v / v[2]
    # v[0] is u (column, 0..1279), v[1] is v (row, 0..719); img indexes [row, col]
    if 0 <= v[0] < 1280 and 0 <= v[1] < 720:
        img[int(np.floor(v[1])), int(np.floor(v[0]))] = [0, 255, 255]

cv2.imwrite('file.png', img)
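The per-point loop can also be written as one vectorized call, which is faster and makes the axis convention explicit (`project_points` is a helper name of my own, not from the question; u indexes image columns, 0..1279, and v indexes rows, 0..719):

```python
import numpy as np

def project_points(e, xyz, width=1280, height=720):
    """Project Nx3 points with a 3x4 projection matrix e; return integer (u, v)
    pixel coordinates that land inside a width x height image (u = column, v = row)."""
    pts = np.hstack([xyz, np.ones((len(xyz), 1))])           # Nx4 homogeneous
    proj = (e @ pts.T).T                                     # Nx3 homogeneous pixels
    proj = proj[proj[:, 2] > 0]                              # drop points behind the camera
    uv = np.floor(proj[:, :2] / proj[:, 2:3]).astype(int)    # dehomogenize, then floor
    u, v = uv[:, 0], uv[:, 1]
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return u[keep], v[keep]
```

With the 3×4 matrix `e` and the CSV from the snippet above, usage would be `u, v = project_points(e, df[['x', 'y', 'z']].to_numpy())` followed by `img[v, u] = [0, 255, 255]`.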

Hardware configuration:

Velodyne 64-channel LiDAR @ 10 Hz

Camera: 1280×720 monocular camera

Checkerboard: 10×7 pattern with 10 cm squares and padding.

I need some help on how I can take the parameters from MATLAB, apply them in my own scripts, and get nearly the same results.

I want to know where I am going wrong when projecting the LiDAR points onto the camera image: when I use the MATLAB functions I get a correct projection, but not with the literature formula.
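One possible cause worth checking (this is an assumption, not something visible in the post): older MATLAB geometry objects such as `rigid3d` use the postmultiply, row-vector convention x_cam = x · R + t, whereas the literature formula above premultiplies column vectors, so the exported R may need to be transposed before use. A quick way to compare the two interpretations:

```python
import numpy as np

R = np.array([[ 0.1240, -0.0038, 0.9923],
              [-0.9912,  0.0474, 0.1240],
              [-0.0475, -0.9989, 0.0021]])
T = np.array([-0.5600, 0.2410, -0.4454])

# Extrinsics if R, T already follow the premultiply (column-vector) convention
ext_pre = np.hstack([R, T.reshape(3, 1)])
# Extrinsics if R came from MATLAB's postmultiply (row-vector) convention:
# x_cam = x @ R + T describes the same pose as x_cam = R.T @ x + T
ext_post = np.hstack([R.T, T.reshape(3, 1)])

test_pt = np.array([5.0, 1.0, 0.5, 1.0])     # arbitrary homogeneous point
print('premultiply :', ext_pre @ test_pt)
print('postmultiply:', ext_post @ test_pt)
```

Projecting a handful of LiDAR points known to be in front of the camera through both extrinsics and checking which one yields positive depths (third component) is a simple way to identify the convention your MATLAB export actually uses.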
