jellyfishwhisperer t1_iu4twl9 wrote

Great list. To add: in the CV space you should be very careful with many "XAI" methods; usually they're just fancy edge detectors. Been Kim is pretty good on this stuff.

https://arxiv.org/abs/1810.03292
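
A minimal PyTorch sketch of the kind of sanity check that paper proposes (model-parameter randomization), assuming a plain input-gradient saliency map, a torchvision ResNet, and a Spearman comparison; these are illustrative choices, not the paper's exact protocol:

```python
# Sketch of the model-parameter-randomization sanity check:
# if a saliency map barely changes after the model's weights are
# re-randomized, the "explanation" isn't really using what the model learned.
import torch
from torchvision.models import resnet18
from scipy.stats import spearmanr

def gradient_saliency(model, x):
    """Plain input-gradient saliency for the top predicted class."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[0, logits[0].argmax()].backward()
    return x.grad.abs().sum(dim=1).squeeze(0)  # aggregate over channels

x = torch.randn(1, 3, 224, 224)  # stand-in for a real, preprocessed image

trained = resnet18(weights="IMAGENET1K_V1").eval()
random_init = resnet18(weights=None).eval()  # same architecture, re-randomized weights

s_trained = gradient_saliency(trained, x)
s_random = gradient_saliency(random_init, x)

# High rank correlation here is a red flag: the saliency method is
# insensitive to everything the model actually learned.
rho, _ = spearmanr(s_trained.flatten().detach(), s_random.flatten().detach())
print(f"Spearman correlation, trained vs. random-model saliency: {rho:.3f}")
```

If the two maps are highly rank-correlated, the "explanation" is largely independent of the learned weights, which is the paper's point about fancy edge detectors.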

18

DigThatData t1_iue7pne wrote

Very thought-provoking stuff! I wonder if an alternative interpretation of these observations might be something along the lines of deep image prior, i.e. maybe randomly initialized deep architectures are capable of performing edge detection just by virtue of how the gradient responds to the stacked operators?
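
A rough sketch of one way to probe that hypothesis, assuming plain input-gradient "saliency" from an untrained conv stack and a Sobel map as the edge reference; the toy image, architecture, and correlation measure are illustrative assumptions, and whether the correlation actually comes out high is exactly the empirical question:

```python
# Sketch: does the input gradient of an *untrained* conv stack already
# resemble an edge map?
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A randomly initialized conv stack -- never trained on anything.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

# Toy image: a bright square on a dark background (sharp edges).
img = torch.zeros(1, 1, 64, 64)
img[:, :, 16:48, 16:48] = 1.0

# "Saliency" of the untrained net: gradient of the summed output w.r.t. the input.
x = img.clone().requires_grad_(True)
net(x).sum().backward()
saliency = x.grad.abs().squeeze()

# Reference edge map from fixed Sobel filters.
sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
sobel_y = sobel_x.transpose(2, 3)
edges = (F.conv2d(img, sobel_x, padding=1) ** 2 +
         F.conv2d(img, sobel_y, padding=1) ** 2).sqrt().squeeze()

# Pearson correlation between the random net's gradient map and the Sobel map.
a = saliency.flatten() - saliency.mean()
b = edges.flatten() - edges.mean()
corr = (a @ b) / (a.norm() * b.norm() + 1e-8)
print(f"correlation with Sobel edge map: {corr.item():.3f}")
```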

1

jellyfishwhisperer t1_iuisl72 wrote

That's about right. Convolution priors in particular lend themselves to edge detection. CV XAI is weird in general though, so I've stepped back a bit. Is a good explanation one that looks good, or one that is faithful to the model, or what? Everyone disagrees. So I've moved to inputs with interpretable features (text, tables, science, etc.).

2

DisWastingMyTime OP t1_iu52gcw wrote

Thank you for the well-thought-out response, will look into those (or my team will ;) )

2

Borky_ t1_iu659bd wrote

Damn, is there anything for us poor TF/Keras users in there? :(

2

DigThatData t1_iu6zlbo wrote

I have tunnel vision on the PyTorch ecosystem (with the occasional JAX cameo).

2

Borky_ t1_iu8d8o2 wrote

Yeah, seems like you guys are getting all the fun toys recently. Either way, I'll save this post for when I'm eventually forced to switch!

1

DigThatData t1_iue3h89 wrote

I think "recently" started about two years after pytorch was released.

1