I have an application. It has two activities, one is a custom camera for taking pictures and another is to show that picture.
My requirement is that whenever I take a picture, it should be displayed according to both the orientation of the device at the time the picture was taken and the current orientation of the device. You may have seen a similar feature in panorama applications.
For example, suppose I take a picture of a view and I move my phone, then I need to display the same picture at an angle. How can I do this?
You should first estimate the device orientation. This is best achieved using sensor fusion; see the following tutorial: http://www.thousand-thoughts.com/2012/03/android-sensor-fusion-tutorial/. Express the orientation as a rotation matrix or a quaternion. Prefer quaternions if you have the choice, as the computations run faster.
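If you go the quaternion route, the core operations are small enough to write yourself. Here is a minimal, self-contained sketch in Java (the class and method names are my own, not from any particular library):

```java
// Minimal quaternion sketch: orientation stored as (w, x, y, z),
// a vector v is rotated by computing q * (0, v) * q^-1.
public class Quaternion {
    public final double w, x, y, z;

    public Quaternion(double w, double x, double y, double z) {
        this.w = w; this.x = x; this.y = y; this.z = z;
    }

    // Unit quaternion from a normalised axis and an angle in radians.
    public static Quaternion fromAxisAngle(double ax, double ay, double az, double angle) {
        double s = Math.sin(angle / 2);
        return new Quaternion(Math.cos(angle / 2), ax * s, ay * s, az * s);
    }

    // Hamilton product.
    public Quaternion mul(Quaternion o) {
        return new Quaternion(
            w * o.w - x * o.x - y * o.y - z * o.z,
            w * o.x + x * o.w + y * o.z - z * o.y,
            w * o.y - x * o.z + y * o.w + z * o.x,
            w * o.z + x * o.y - y * o.x + z * o.w);
    }

    // Conjugate; for a unit quaternion this is also the inverse.
    public Quaternion conj() {
        return new Quaternion(w, -x, -y, -z);
    }

    // Rotate a 3-vector by this quaternion.
    public double[] rotate(double[] v) {
        Quaternion r = this.mul(new Quaternion(0, v[0], v[1], v[2])).mul(this.conj());
        return new double[] { r.x, r.y, r.z };
    }

    public static void main(String[] args) {
        // A 90-degree rotation about Z sends the X axis to the Y axis.
        Quaternion q = fromAxisAngle(0, 0, 1, Math.PI / 2);
        double[] v = q.rotate(new double[] { 1, 0, 0 });
        System.out.printf("%.3f %.3f %.3f%n", v[0], v[1], v[2]);
    }
}
```

In practice you would feed this class with the orientation your sensor-fusion code reports, rather than a hard-coded axis/angle.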
For the rest of your application, use a 3D engine to display the photos you have already taken. Any 3D engine will do; pick the one you are most familiar with. We are going to display the photos as textured rectangles in the 3D engine.
First we have to determine the size of these textured rectangles. We have to choose the distance at which the rectangles are displayed: this is arbitrary, so let's take 1 meter. Then we need to know the camera's horizontal and vertical field-of-view angles, which vary from device to device. For the sake of this explanation, imagine a camera with a 60-degree horizontal and vertical field of view (a square image). At 1 m, each half-extent of the rectangle is 1 m × tan(30°) ≈ 0.577 m, so the photos are equivalent to squares of roughly 1.15 m × 1.15 m placed 1 m away from the observer. This is simple trigonometry; I hope you can visualise it. Let me know if you need more explanations.
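To make the trigonometry concrete, here is a small Java sketch; the 60-degree field of view and 1 m distance are just the example values assumed above:

```java
// Sketch: size of the textured rectangle for a given field of view.
// With the photo plane placed at distance d from the observer,
// halfExtent = d * tan(fov / 2).
public class PhotoQuad {
    static double halfExtent(double distance, double fovDegrees) {
        return distance * Math.tan(Math.toRadians(fovDegrees) / 2);
    }

    public static void main(String[] args) {
        double d = 1.0;    // chosen display distance: 1 metre (arbitrary)
        double fov = 60.0; // assumed horizontal and vertical field of view
        double h = halfExtent(d, fov);
        // prints: half extent = 0.577 m, full size = 1.155 m
        System.out.printf("half extent = %.3f m, full size = %.3f m%n", h, 2 * h);
    }
}
```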
We can calculate mathematically the coordinates of the rectangle when the orientation angle is null. In our example, the square has corners (+0.577, +0.577), (+0.577, -0.577), (-0.577, -0.577), (-0.577, +0.577) in its plane. Every time we take a photo, we compute its corners by rotating the null-angle coordinates by the orientation quaternion/matrix. We can then add to the 3D engine a textured rectangle at these coordinates, with the photo as its texture.
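Here is a sketch of that corner-rotation step in Java, using a plain 3×3 rotation matrix; the 90-degree yaw is a stand-in for whatever orientation your sensor fusion reports at capture time:

```java
// Sketch: corners of the null-angle quad (half extent w, in the plane
// z = -d in front of the observer), rotated by the capture orientation.
public class QuadCorners {
    // Apply a 3x3 row-major rotation matrix to a vector.
    static double[] apply(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
        return r;
    }

    // Rotation about the Y (up) axis by angle a, in radians.
    static double[][] yaw(double a) {
        return new double[][] {
            {  Math.cos(a), 0, Math.sin(a) },
            {  0,           1, 0           },
            { -Math.sin(a), 0, Math.cos(a) } };
    }

    public static void main(String[] args) {
        double w = 0.577, d = 1.0; // half extent and display distance (assumed)
        double[][] corners = {
            {  w,  w, -d }, {  w, -w, -d }, { -w, -w, -d }, { -w,  w, -d } };
        double[][] m = yaw(Math.toRadians(90)); // example capture orientation
        for (double[] c : corners) {
            double[] r = apply(m, c);
            System.out.printf("(%.3f, %.3f, %.3f)%n", r[0], r[1], r[2]);
        }
    }
}
```

In your application the matrix would come from the orientation recorded when the photo was taken, and the rotated corners are what you hand to the 3D engine as the quad's vertices.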
All we need to do now is update the camera orientation as the device orientation changes. This is done through the 3D engine, typically by rotating all the registered objects by the inverse of the orientation quaternion/matrix, but the details depend on your API. You can also overlay the live camera image if you want.
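As a sanity check on that "inverse rotation" idea, here is a Java sketch showing that applying the transpose of the current orientation matrix undoes that orientation (for a unit quaternion, the conjugate plays the same role):

```java
// Sketch: to keep the scene world-aligned as the device moves, apply the
// inverse of the current device orientation to the registered quads. For a
// rotation matrix the inverse is the transpose. Here we verify that R^T
// undoes R on a sample view vector.
public class InverseRotation {
    static double[] apply(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
        return r;
    }

    static double[][] transpose(double[][] m) {
        return new double[][] {
            { m[0][0], m[1][0], m[2][0] },
            { m[0][1], m[1][1], m[2][1] },
            { m[0][2], m[1][2], m[2][2] } };
    }

    // Rotation about the Y (up) axis by angle a, in radians.
    static double[][] yaw(double a) {
        return new double[][] {
            {  Math.cos(a), 0, Math.sin(a) },
            {  0,           1, 0           },
            { -Math.sin(a), 0, Math.cos(a) } };
    }

    public static void main(String[] args) {
        double[] forward = { 0, 0, -1 };             // original view direction
        double[][] device = yaw(Math.toRadians(35)); // example current orientation
        // Rotating by the device orientation and then by its inverse (the
        // transpose) returns the original vector.
        double[] back = apply(transpose(device), apply(device, forward));
        System.out.printf("(%.3f, %.3f, %.3f)%n", back[0], back[1], back[2]);
    }
}
```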
In the above explanation I took a square camera with a 60-degree horizontal and vertical field of view to help you visualise what happens. In general, the null-angle rectangle has corners (X, Y), (X, -Y), (-X, -Y), (-X, Y), where X = d·tan(fov_h / 2), Y = d·tan(fov_v / 2), fov_h and fov_v are the horizontal and vertical fields of view, and d is the display distance.
With the above explanation you should be able to implement what you want in your chosen language and libraries. Please let us know how it goes.