Competitive_Dog_6639 t1_j30hjmd wrote

The situation you are describing is not possible in theory. If a matrix is PSD and invertible, it must be positive definite. And the inverse of a positive definite matrix is also positive definite, which means it can only yield positive Mahalanobis distances (or zero if the vectors are identical). https://math.stackexchange.com/questions/2288067/inverse-of-a-symmetric-positive-definite-matrix
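A minimal numpy check of that claim (the matrices and vectors here are made-up illustrative values, not from the thread): build a symmetric positive definite matrix, and confirm its inverse is positive definite and gives a positive squared Mahalanobis distance.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4 * np.eye(4)       # symmetric positive definite by construction

S_inv = np.linalg.inv(S)
eigs = np.linalg.eigvalsh(S_inv)  # all eigenvalues are positive

x, mu = rng.standard_normal(4), rng.standard_normal(4)
d2 = (x - mu) @ S_inv @ (x - mu)  # squared Mahalanobis distance, positive for x != mu
```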

In practice, this might happen due to small eigenvalues and numerical error. The easiest fix is to add the identity scaled by a small constant to the covariance matrix, like in ridge regression, as others suggest.
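A sketch of that fix (the rank-deficient matrix and the ridge constant `eps` are assumptions for illustration): a singular PSD covariance can produce tiny negative squared distances through floating-point error, but adding a scaled identity makes it strictly positive definite.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
S = A @ A.T                            # rank 3, so PSD but singular in exact arithmetic

eps = 1e-6                             # small ridge constant; tune for your scale
S_reg = S + eps * np.eye(S.shape[0])   # now strictly positive definite

x, mu = rng.standard_normal(5), rng.standard_normal(5)
diff = x - mu
d2 = diff @ np.linalg.solve(S_reg, diff)  # squared Mahalanobis distance, now guaranteed > 0
```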

3

Competitive_Dog_6639 t1_ivfrjh9 wrote

My take: the prior work mentioned doesn't undermine the main claim of the paper, which is that, without retraining, one can find permutations that map nets to the same basin.

The objection to this point is raised in part C) by the commenter, where an appeal is made to the commenter's own paper. I read that paper and didn't see explicit ideas related to connected modes. Plus, the commenter's paper retrains the nets, which runs against the main idea of git rebasin. While the ideas of mode connectivity may be latent there, they are not mentioned at all. Why is it the job of the git rebasin authors to dig so deeply into one out of thousands of related papers to give the commenter credit for an idea that isn't even explicitly discussed? I would also point out that the commenter's paper might be missing related references to things like SWA, so maybe nobody's perfect?

Even if many of the methods come from previous work, I don't see anything that undermines the central claim of git rebasin, and for me that's that: it's an original and important idea. Could it relate better to previous work? Sure.

7