Stereo vision for long range up to 4 km
770 Views, asked by freelance medben

In general, stereo vision is used for robotic applications where the depth is a few meters. In my case I want to estimate depth up to 4 km. In theory this works fine (large baseline of 70 m, long focal length of 660 mm), but I have not tried it experimentally. Is it worth it? Has anybody tried it? What do you think?
1 answer below.
Sounds feasible, except for a few practical caveats.
All the same math applies, just scaled up by a factor of ~1000. An equivalent would be a 70 mm baseline with a working range of 3.5-4.5 meters, which is comparable to the human visual system and many consumer stereo cameras.
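To make the scaling concrete, the pinhole relation Z = f·B/d gives the expected disparity and the depth resolution per pixel of disparity error. This sketch uses the figures from the question; the sensor pixel pitch (4.5 µm) is an assumption, so substitute your own:

```python
# Expected disparity and per-pixel depth resolution for the proposed rig.
f_mm = 660.0          # focal length in mm (from the question)
B = 70.0              # baseline in metres (from the question)
pixel_pitch_um = 4.5  # ASSUMPTION: sensor pixel pitch in micrometres

f_px = (f_mm * 1e-3) / (pixel_pitch_um * 1e-6)  # focal length in pixels

for Z in (3500.0, 4000.0, 4500.0):  # working range in metres
    d = f_px * B / Z                 # disparity in pixels, from Z = f*B/d
    dZ = Z**2 / (f_px * B)           # depth change per pixel of disparity error
    print(f"Z = {Z:6.0f} m  disparity = {d:7.1f} px  resolution = {dZ:.2f} m/px")
```

With these assumed numbers the disparity at 4 km is on the order of 2500 px and the depth resolution around 1.5 m per pixel of disparity, so the idea is numerically plausible; a matcher with sub-pixel refinement improves on that further.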
The "usual" calibration methods (chessboard/ChArUco) won't work at that scale, so you'd have to measure and calculate the relevant matrices yourself, and physically adjust the rest.
Extrinsics:
You could point them at infinity (star-filled sky, near horizon presumably) and align such that the pictures match precisely. Then the rotation matrix would be an identity matrix. Or you could point the cameras at the same point at 4 km distance. Then you'd calculate the rotation from that distance and the baseline. For precise physical adjustment, look for "gonio stages" (goniometer).
Translation would just be the baseline distance, which is given.
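The two extrinsic options above can be sketched numerically. This is a minimal illustration, assuming the 70 m baseline and a 4 km convergence point from the question, with R and T in the convention of OpenCV's `stereoCalibrate` output (mapping the left camera frame into the right):

```python
import numpy as np

B = 70.0    # baseline in metres (given)
Z = 4000.0  # convergence distance in metres

# Case 1: both optical axes aimed at infinity -> relative rotation is identity.
R_infinity = np.eye(3)

# Case 2: cameras toed in toward a common point at Z. Each camera yaws by
# atan((B/2)/Z); the relative rotation between the two is twice that angle.
half = np.arctan2(B / 2.0, Z)   # per-camera toe-in, ~0.50 deg here
phi = 2.0 * half                # relative yaw between the cameras
c, s = np.cos(phi), np.sin(phi)
R_converged = np.array([[  c, 0.0,   s],
                        [0.0, 1.0, 0.0],
                        [ -s, 0.0,   c]])  # rotation about the vertical (y) axis

T = np.array([-B, 0.0, 0.0])    # translation: just the baseline

print(np.degrees(phi))          # total relative rotation, ~1.0 degree
```

The toe-in angles involved are tiny (half a degree per camera), which is exactly why precise mechanical adjustment with goniometer stages matters here.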
Intrinsics:
Focal length can be estimated by measuring the pixel size of an object of known length. This involves a little trigonometry.
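The trigonometry reduces to the pinhole model: the focal length in pixels is the object's pixel size scaled by distance over physical size. A sketch, where the target width, distance, and measured pixel span are made-up example numbers:

```python
# Estimating focal length in pixels from a target of known size (pinhole model).
def focal_length_px(object_size_m, distance_m, size_in_pixels):
    # size_in_pixels = f_px * object_size_m / distance_m, solved for f_px
    return size_in_pixels * distance_m / object_size_m

# Example: a 10 m wide building face, 4 km away, spanning 367 pixels:
f_px = focal_length_px(10.0, 4000.0, 367.0)
print(f_px)  # 146800.0
```

Multiplying the result by an assumed pixel pitch (e.g. 4.5 µm) recovers the focal length in metric units, here roughly the 660 mm quoted in the question.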
Assume no distortion, which is a fair assumption for such long focal lengths. Distortion coefficients are all 0.
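With zero distortion and hand-measured intrinsics/extrinsics, the last step is turning disparities into metric depth. A minimal sketch with synthetic disparity values; in practice they would come from a stereo matcher such as `cv2.StereoSGBM`:

```python
import numpy as np

f_px = 146666.7  # focal length in pixels (660 mm at an ASSUMED 4.5 um pitch)
B = 70.0         # baseline in metres

disparity = np.array([[2933.3, 2566.7, 2281.5]])  # synthetic disparities, px
depth = np.where(disparity > 0, f_px * B / disparity, np.inf)  # Z = f*B/d
print(depth)  # roughly [[3500, 4000, 4500]] metres
```

Note the disparities here are thousands of pixels, so the search range of the matcher must be configured accordingly, or the images pre-shifted before matching.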