
Nameless1995 t1_j1xeo4l wrote

There is a body of literature on taking gradient agreement/conflict into account, though usually with motivations different from the exact one in the OP.

This is one place to start looking: https://arxiv.org/abs/2009.00329 (you can find related work through its citations on Google Scholar/Semantic Scholar)
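As a rough illustration of the gradient-agreement idea: one common instantiation (in the spirit of that paper's AND-mask) is to zero out gradient components whose signs disagree across environments or batches before applying the update. This is a minimal sketch, not the paper's actual code; the function name and the flattened per-environment gradient representation are my own assumptions.

```python
import torch

def and_mask_gradients(per_env_grads: torch.Tensor,
                       agreement_threshold: float = 1.0) -> torch.Tensor:
    """Combine per-environment gradients, masking sign-conflicting components.

    per_env_grads: (num_envs, num_params) tensor, one flattened gradient
        per environment (or per batch).
    agreement_threshold: minimum |mean sign| for a component to be kept;
        1.0 keeps only components where every environment agrees on the sign.
    """
    signs = torch.sign(per_env_grads)   # entries in {-1, 0, +1}
    mean_sign = signs.mean(dim=0)       # in [-1, 1]; |.| == 1 means unanimous
    mask = (mean_sign.abs() >= agreement_threshold).to(per_env_grads.dtype)
    return per_env_grads.mean(dim=0) * mask  # masked average gradient

# Example: the second component has conflicting signs and gets zeroed.
g = torch.tensor([[0.5, -0.2],
                  [0.3,  0.4]])
print(and_mask_gradients(g))  # tensor([0.4000, 0.0000])
```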


derpderp3200 OP t1_j1ygqtj wrote

What a fascinating paper. It reminds me of an idea I had: storing some sort of secondary value in weights that contribute to correct outputs, to prevent their features from being unlearned. I had no specific idea of how to execute it, though; I can't believe I didn't think of what this paper's authors did. Thank you.
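For reference, one existing realization of that "secondary value" idea is Elastic Weight Consolidation (Kirkpatrick et al., 2017), where each weight carries an importance estimate and a quadratic penalty resists updates to weights that mattered for earlier behavior. A minimal sketch of that penalty, assuming hypothetical `importance` and `anchors` dicts keyed by parameter name (not code from the linked paper):

```python
import torch

def consolidation_penalty(model: torch.nn.Module,
                          importance: dict,
                          anchors: dict,
                          strength: float = 1.0) -> torch.Tensor:
    """EWC-style quadratic penalty resisting changes to high-importance weights.

    importance: per-parameter tensors scoring how much each weight
        contributed to past correct outputs (e.g. a Fisher-information
        estimate, accumulated while the model was performing well).
    anchors: snapshots of the parameters taken at that time.
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - anchors[name]) ** 2).sum()
    return strength * penalty
```

In training you would add this to the task loss (`loss = task_loss + consolidation_penalty(...)`), so gradient descent is pulled away from overwriting the protected weights.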
