Submitted by 51616 t3_yt6slt in MachineLearning
advstra t1_iw4in0h wrote
Reply to comment by TheLastVegan in Relative representations enable zero-shot latent space communication by 51616
People are making fun of you, but this is exactly how CS papers sound (literally the first sentence of the abstract: "Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations."). And from what I could understand, you actually weren't that far off, more or less?
TheLastVegan t1_iw5a72q wrote
I was arguing that the paper's proposal could improve scaling by addressing the symptoms of lossy training methods, and suggested that weighted stochastics can already do this with style vectors.
advstra t1_iw6n49p wrote
So, from a quick skim of the paper, they're suggesting a new method for data representation (pairwise similarities), and you're suggesting that adding style vectors (which, as far as I know, is essentially another representation method) could improve it for multimodal tasks? I think that makes sense; it reminds me of contextual word embeddings, if I didn't misunderstand anything.
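For anyone skimming the thread: the "pairwise similarities" idea, as I understand it, is that instead of using a sample's raw latent vector, you describe it by its cosine similarities to a shared set of anchor samples, which is what lets latent spaces from independently trained models be compared. A minimal NumPy sketch of that idea (the function name and shapes are my own, not from the paper):

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Represent each embedding by its cosine similarity to each anchor.

    embeddings: (n_samples, dim) absolute latent vectors
    anchors:    (n_anchors, dim) latent vectors of the chosen anchor samples
    returns:    (n_samples, n_anchors) relative representation
    """
    # Normalize rows so the dot product becomes cosine similarity.
    e = embeddings / np.linalg.norm(embeddings, axis=-1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    return e @ a.T

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))      # absolute embeddings from some encoder
anchors = rng.normal(size=(3, 16))  # embeddings of 3 shared anchor samples
rel = relative_representation(emb, anchors)
print(rel.shape)  # one cosine similarity per (sample, anchor) pair
```

Two encoders that embed the same data differently (e.g. up to a rotation) would still produce similar relative representations, since cosine similarities are invariant to angle-preserving transformations.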