I'm looking to create an animated sequence of photos, ordered so that consecutive frames are as similar as possible. While researching, I've come across two methods:
- Generate a pHash of the images and do a nearest neighbor using the Hamming distance of the hash.
- Create color histograms and do an n-dimensional nearest neighbor using Euclidean distance.
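For concreteness, here's a minimal stdlib-only sketch of the two distance measures involved. The hash values and histogram vectors are made-up placeholders, not output from any real image; the point is just that the first compares bit patterns while the second compares counts in a vector space.

```python
import math

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes (e.g. 64-bit pHashes)."""
    return bin(h1 ^ h2).count("1")

def euclidean_distance(hist1, hist2) -> float:
    """n-dimensional Euclidean distance between two histogram vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(hist1, hist2)))

# Toy pHash-like values: one flipped bit -> distance 1
print(hamming_distance(0b10110010, 0b10110011))

# Toy 3-bin color histograms
print(euclidean_distance([10, 0, 5], [10, 4, 2]))
```

Note the structural difference: the hash collapses an image to a single compact fingerprint compared bitwise, while the histogram keeps one dimension per color bin and is compared geometrically.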
Many commenters on https://stackoverflow.com/questions/6971966/how-to-measure-percentage-similarity-between-two-images claim that the two approaches are essentially the same. I'm looking for a little more insight on this, because they seem like different processes to me.
Thoughts?