I projected the 32*32 image data onto the principal components (the eigenfaces) using the function below.
def projectData(X, U, K):
    """Project the data onto the top K eigenvectors in U (first K columns).

    X: data (one example per row)
    U: eigenvectors, one per column
    K: your choice of dimension
    """
    new_U = U[:, :K]
    return X.dot(new_U)
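To make the shapes concrete, here is a hypothetical usage sketch (the random data and the identity matrix standing in for the eigenvector matrix are assumptions, not the actual dataset):

```python
import numpy as np

# projectData as defined above
def projectData(X, U, K):
    return X.dot(U[:, :K])

# Hypothetical example: 5 flattened 32x32 images and a stand-in
# orthonormal matrix U (one eigenvector per column).
X = np.random.rand(5, 1024)
U = np.eye(1024)            # placeholder for the real eigenvector matrix
Z = projectData(X, U, 100)
print(Z.shape)              # (5, 100): each image is now a 100-dimensional vector
```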
Let's say my K is 100. Now I have 100-dimensional data instead of 1024-dimensional (32*32). When I display the projected data, I get the image below.
Is this normal behavior? I'm following Andrew Ng's ML class for PCA (https://github.com/kaleko/CourseraML/blob/master/ex7/ex7.ipynb) and noticed that none of the tutorials in that class display the projected data itself; they always display the reconstructed version (they recover the data by projecting it back into the original high-dimensional space, then display that).
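For reference, the reconstruction step those tutorials use can be sketched as follows (a minimal sketch; the name `recoverData` follows the exercise's convention, and orthonormal columns in U are assumed):

```python
import numpy as np

def recoverData(Z, U, K):
    """Map the K-dimensional projections Z back to the original space.

    Assumes the columns of U are orthonormal eigenvectors, so the
    pseudo-inverse of U[:, :K] is simply its transpose:
    X_rec = Z @ U[:, :K].T
    """
    return Z.dot(U[:, :K].T)
```

Each recovered row is 1024-dimensional again, so it can be reshaped to 32x32 for display.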
My question is: how should I display the projected data? Is it even meaningful to display it directly? Why do all the other tutorials recover the data before displaying it?