Hi all!
I'm a beginner in the computer vision field :)
I have 2 questions about SfM and camera re-localization.
As I know, the general pipeline of SfM can be summarized as follows:
1. Extract local features
2. Match local features (key points) between image pairs
3. Triangulate the matched key points between image pairs to estimate 3D points
Next, camera re-localization is performed.
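To make the triangulation step (3) concrete, here is a minimal numpy sketch of linear (DLT) triangulation of one matched key point from two views. It assumes the two 3x4 camera projection matrices are already known; the toy camera setup and all function names are illustrative, not from any specific SfM library.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates
    of the same matched key point in each view."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) - (P[0] @ X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy setup: identity intrinsics, second camera translated along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # baseline of 1

# Project a known 3D point into both views (noise-free matches).
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_hat = triangulate(P1, P2, x1, x2)
print(np.round(X_hat, 6))  # recovers X_true in the noise-free case
```

Note that the recovered coordinates are expressed in whatever frame the projection matrices are defined in; the sketch only shows the geometry of step 3, not where those camera poses come from.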
My questions are:
- I'm currently researching the effect of noisy camera poses on SfM. However, I cannot find where the GT camera poses are input to the SfM pipeline above. Aren't the 3D points estimated from triangulation in real-world coordinates?
- If the answer to the above question is NO, then how can camera re-localization work?
Thanks in advance for any kind answers :)
I tried to build SfM models with the 7Scenes and Cambridge datasets.
However, some publicly available sample SfM codes generate the SfM model from only the images.