I'm trying to calibrate a camera that has obvious radial lens distortion and is pointed at a big screen that fills most of the image.
As I'm a lazy bum, I don't want to wave a physical calibration board around in front of the camera, so I instead created this video with an ArUco calibration board rendered in various poses: https://youtu.be/D73H6IJdbkg
While this plays on the big screen, I have the camera take a picture every two seconds, collect the resulting pile of images, and then compute the calibration parameters with the usual OpenCV machinery (specifically, with calibrateCameraAruco).
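For reference, my calibration step looks roughly like this (a minimal sketch: the dictionary, board layout, and file paths are placeholders that would have to match the actual setup, and it assumes an opencv-contrib-python build old enough to still expose cv2.aruco.calibrateCameraAruco):

```python
import glob

import cv2
import numpy as np

# Placeholder board definition -- must match the board shown in the video.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
board = cv2.aruco.GridBoard_create(5, 7, 0.04, 0.01, dictionary)

all_corners = []    # detected marker corners, accumulated over all frames
all_ids = []        # detected marker ids, one array per frame
marker_counts = []  # number of markers detected in each frame

image_size = None
for path in sorted(glob.glob("frames/*.png")):  # one frame every two seconds
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]  # (width, height)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None and len(ids) > 0:
        all_corners.extend(corners)
        all_ids.append(ids)
        marker_counts.append(len(ids))

rms, camera_matrix, dist_coeffs, _rvecs, _tvecs = cv2.aruco.calibrateCameraAruco(
    all_corners,
    np.concatenate(all_ids),
    np.array(marker_counts, dtype=np.int32),
    board,
    image_size,
    None,  # no initial camera matrix
    None,  # no initial distortion coefficients
)
print("RMS reprojection error:", rms)
print("distortion coefficients:", dist_coeffs.ravel())
```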
However, the calibration result kinda sucks.
The intrinsic parameters (focal length, optical center) come out pretty much as expected as far as I can tell: the optical center is close to half the image resolution, and the focal length is around 300 pixels, which seems plausible for a wide-angle lens. The radial distortion coefficients, however, seem to have practically no effect: if I use them to undistort the camera image, the result looks nearly identical to the input, with more or less the same radial distortion.
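This is how I check the result, continuing from the calibration sketch above (plain cv2.undistort; the file name is a placeholder):

```python
import cv2

# camera_matrix and dist_coeffs come from the calibration sketch above.
img = cv2.imread("frames/test_frame.png")  # placeholder test image
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

# Side-by-side comparison: with usable coefficients, straight lines on the
# screen should look noticeably straighter in the undistorted image.
cv2.imwrite("comparison.png", cv2.hconcat([img, undistorted]))
```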
What am I missing here?
Update:
As @ChristophRackwitz correctly pointed out, the whole idea rests on a wrong assumption. By rendering the calibration board in various sizes and poses, I implicitly introduce a second, virtual camera with its own unknown set of parameters, and running a camera calibration on the resulting images just gives me some more or less arbitrary mashup of the two parameter sets. So: no shortcuts, and back to the physical calibration board.
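To spell that out (my rough notation, not from the original discussion): the board-to-pixel mapping the calibration actually sees is a composition

```
p = P_physical( P_virtual(X) )
```

where P_virtual is the unknown projection used to render the 3D board point X onto the flat screen, and P_physical maps screen points to camera pixels. calibrateCameraAruco then fits a single pinhole-plus-distortion model to this composite, which is why the recovered coefficients describe neither camera on its own.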