I'm developing an app that creates spherical panoramas, using ARKit. I added a button named Capture. Every time the user taps Capture, the app takes a snapshot, creates a plane oriented to the device's point of view, and uses the snapshot image as the diffuse material for that plane.
My end goal is to export all those planes stitched into one image to make a spherical panorama. Can anyone guide me in the right direction?
I've tried using OpenCV, but it doesn't work when I take photos of the ceiling or the floor, and it uses a lot of CPU and memory. After spending more than a month, I'm only able to create a regular panorama with OpenCV, and even then only by stitching images in small batches and then stitching those partial results into the final image. It also only works well when the phone is on a tripod: as long as the camera doesn't translate much along the x, y, and z axes, the results are OK.
So I guess the only two options left are exporting the ARKit scene with its multiple planes (with the photos on them), or using the phone's gyro data to stitch the images.
I'm guessing that using gyro data to stitch images will be extremely complicated in itself. Can anyone point me in the right direction?
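For what it's worth, here is a minimal sketch (in Python/NumPy, just to illustrate the math, not production code) of the rotation-based approach I'm considering: record each snapshot's camera rotation and intrinsics (ARKit exposes these as `frame.camera.transform` and `frame.camera.intrinsics`), then for every pixel of an equirectangular output image, compute its direction on the sphere, rotate that ray into each camera's frame, and sample the snapshot where the ray lands. The function names and conventions below are my own assumptions:

```python
# Hypothetical sketch: paint posed snapshots into an equirectangular panorama.
# Assumes R is the camera-to-world rotation of a snapshot (e.g. the upper-left
# 3x3 of ARKit's frame.camera.transform) and fx, fy, cx, cy are pinhole
# intrinsics. Axis conventions (e.g. which way is up) depend on your setup.
import numpy as np

def equirect_rays(width, height):
    """Unit ray direction (world frame) for every pixel of the output panorama."""
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi    # -pi .. +pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi  # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)                              # (H, W) each
    return np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)         # (H, W, 3)

def splat_image(pano, image, R, fx, fy, cx, cy):
    """Project one snapshot into the panorama using its rotation R."""
    h, w = image.shape[:2]
    rays = equirect_rays(pano.shape[1], pano.shape[0])
    cam = rays @ R                      # world ray -> camera frame (R^T * ray)
    z = cam[..., 2]
    u = fx * cam[..., 0] / z + cx       # pinhole projection
    v = fy * cam[..., 1] / z + cy
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    pano[valid] = image[v[valid].astype(int), u[valid].astype(int)]
    return pano
```

The nearest-neighbour sampling and hard overwrite are deliberate simplifications; a real implementation would blend overlapping snapshots and interpolate, but the core of the gyro/pose approach is just this per-pixel ray rotation.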