Submitted by olmec-akeru t3_z6p4yv in MachineLearning
olmec-akeru OP t1_iy7ai6s wrote
Reply to comment by new_name_who_dis_ in [D] What method is state of the art dimensionality reduction by olmec-akeru
>beauty of the PCA reduction was that one dimension was responsible for the size of the nose
I don't think this always holds. You were lucky that your dataset contains variation confined in such a way that an eigenvector happens to line up with a single visual feature. There is no mathematical property of PCA that guarantees this.
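What PCA does guarantee is only that components are orthogonal directions ordered by explained variance. A minimal sketch on synthetic data (factor names and numbers are purely illustrative) of how correlated factors end up mixed in a single component:

```python
import numpy as np
from sklearn.decomposition import PCA

# Two illustrative latent factors (say "nose size" and "face width")
# that are correlated, plus a little noise.
rng = np.random.default_rng(0)
nose = rng.normal(size=1000)
width = 0.8 * nose + 0.6 * rng.normal(size=1000)   # correlated with nose
X = np.column_stack([nose, width]) + 0.05 * rng.normal(size=(1000, 2))

pca = PCA(n_components=2).fit(X)

# The first component is the direction of maximal variance; because the
# factors are correlated it blends "nose" and "width" rather than isolating one.
print(pca.components_[0])
print(pca.explained_variance_ratio_)
```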
There have been some attempts to formalise something like what you have described. The closest I've seen is the beta-VAE: https://lilianweng.github.io/posts/2018-08-12-vae/
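For reference, the beta-VAE objective described in that post is just the VAE ELBO with the KL term up-weighted by beta > 1. A rough PyTorch-style sketch (tensor names are placeholders, and the Bernoulli reconstruction term is one common choice, not the only one):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """ELBO with the KL term scaled by beta to encourage disentanglement."""
    # Reconstruction term (binary cross-entropy here; MSE is also common).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal-Gaussian encoder.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```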
new_name_who_dis_ t1_iy84a83 wrote
It’s not really luck. There is variation in nose size (it’s one of the most varied features of the face), so that variance is guaranteed to be represented in the eigenvectors.
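You can check this with the classic eigenfaces setup; a sketch using scikit-learn's Olivetti faces (the dataset and component count are just for illustration):

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

faces = fetch_olivetti_faces()      # 400 images of 64x64 = 4096 pixels
X = faces.data                      # shape (400, 4096)

pca = PCA(n_components=50, whiten=True).fit(X)

# Each row of components_ is an "eigenface": a 4096-d direction of variance.
# Variance in nose size is necessarily captured by this basis, although it
# may be spread across several components rather than pinned to one.
eigenfaces = pca.components_.reshape((50, 64, 64))
print(pca.explained_variance_ratio_[:5])
```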
And yes, beta-VAEs are one of the things you can try to get a disentangled latent space, although they don’t work that well in my experience.
olmec-akeru OP t1_iy8ajq0 wrote
> the beauty of the PCA reduction was that one dimension was responsible for the size of the nose
You posit that an eigenvector will represent the nose even when there are meaningful variations in scale, rotation, and position?
This is very different from saying that all variance will be explained across the full set of eigenvectors (which is very much true).
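That full-set statement is easy to verify: keep every component and the explained-variance ratios sum to one. A minimal check on arbitrary data (the data here is just random, for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))      # any dataset works here

pca = PCA().fit(X)                  # keep all components

# The full eigenbasis accounts for all of the variance...
print(pca.explained_variance_ratio_.sum())   # ~1.0 up to float error
# ...but nothing ties any single component to a semantic factor.
print(pca.explained_variance_ratio_[:3])
```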
new_name_who_dis_ t1_iy8b0jr wrote
It was just an example. Sure, not all nose sizes fall along the same eigenvector.