DigThatData t1_iu4rr8y wrote
sometimes; I'm usually more likely to bust something like this out when I have a specific need than as part of my general process. simpler metrics like gradient magnitude often get the job done well enough.
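to make that concrete, here's roughly what I mean by "gradient magnitude": per-parameter and global grad norms pulled right after `backward()`. just a minimal sketch, the `model`/`loss` names are placeholders for whatever you're already training:

```python
import torch

def grad_norms(model: torch.nn.Module):
    """Per-parameter and global gradient L2 norms, to call right after loss.backward()."""
    per_param = {
        name: p.grad.norm().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
    total = sum(v ** 2 for v in per_param.values()) ** 0.5
    return per_param, total

# inside a training step:
#   loss.backward()
#   per_param, total = grad_norms(model)
#   print(f"global grad norm: {total:.4f}")  # or log to tensorboard / wandb
```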
in any event, it sounds like you're interested in the tooling space so here are a few projects I think are interesting, regardless of whether or not I use them myself:
- https://github.com/f-dangel/cockpit
- https://github.com/f-dangel/backpack
- https://github.com/f-dangel/vivit
- https://github.com/DistrictDataLabs/yellowbrick
- https://github.com/tomgoldstein/loss-landscape
- https://github.com/hila-chefer/Transformer-MM-Explainability
- https://github.com/pytorch/captum
- https://github.com/cleverhans-lab/cleverhans
- https://github.com/TorchDrift/TorchDrift
- https://github.com/MAIF/shapash
- https://github.com/uncertainty-toolbox/uncertainty-toolbox
- https://github.com/ropas/pytea
- https://github.com/oegedijk/explainerdashboard
- https://github.com/deepchecks/deepchecks
- https://github.com/Trusted-AI/AIX360
- https://github.com/delve-team/delve
- https://github.com/CalculatedContent/WeightWatcher
- https://github.com/archinetai/surgeon-pytorch
- https://github.com/xl0/lovely-tensors
jellyfishwhisperer t1_iu4twl9 wrote
Great list. To add: in the CV space you should be very careful with many "XAI" methods. Usually they're just fancy edge detectors. Been Kim's work is pretty good on this stuff.
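The quickest way to see it for yourself is a weight-randomization sanity check: compute a plain gradient saliency map, re-randomize the model's weights, and compute it again. If the two maps look basically the same, the "explanation" is mostly reacting to edges in the input, not to anything the model learned. A rough sketch with plain autograd (the model/input names are placeholders, nothing library-specific):

```python
import copy
import torch

def gradient_saliency(model, x, target):
    """|d logit_target / d input|: the simplest possible saliency map."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.abs().max(dim=1).values[0]  # collapse channels -> HxW heatmap

def randomized_copy(model):
    """Deep copy of the model with every parameter re-randomized."""
    m = copy.deepcopy(model)
    for p in m.parameters():
        torch.nn.init.normal_(p, std=0.02)
    return m

# model: any trained image classifier, x: a (1, 3, H, W) tensor, target: a class index
#   sal_trained = gradient_saliency(model.eval(), x, target)
#   sal_random  = gradient_saliency(randomized_copy(model).eval(), x, target)
# if sal_random looks just like sal_trained, the map is telling you about edges,
# not about the model.
```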
DigThatData t1_iue7pne wrote
very thought-provoking stuff! I wonder if an alternative interpretation of these observations might be something along the lines of deep image prior, i.e. maybe randomly initialized deep architectures are capable of performing edge detection just by virtue of how the gradient responds to the stacked operators?
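something like this toy check is what I have in mind: take a completely untrained conv stack and look at the gradient of its output w.r.t. the input, which on natural images already tends to highlight high-frequency structure. hypothetical setup, random input as a stand-in for a real image:

```python
import torch
import torch.nn as nn

# a freshly initialized, completely untrained conv stack
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

x = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in for a real image
net(x).sum().backward()

# input gradient of the random stack: on natural images this heatmap already
# tends to pick out high-frequency structure, i.e. edges
edge_like = x.grad.abs().sum(dim=1)[0]
print(edge_like.shape)  # torch.Size([64, 64])
```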
jellyfishwhisperer t1_iuisl72 wrote
That's about right. Convolution priors in particular lend themselves to edge detection. CV XAI is weird in general though, so I've stepped back a bit. Is a good explanation one that looks good, or one that is faithful to the model, or what? Everyone disagrees. So I've moved to inputs with interpretable features (text, tables, science, etc.).
soulshakedown t1_iu5n8kc wrote
I haven't used it yet, but I really want to dig into WeightWatcher. I just listened to a really nice Practical AI podcast episode with its main contributor, if anyone is interested: https://changelog.com/practicalai/194
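From skimming the README, basic usage looks like just a few lines; a sketch of what I think the API is (haven't verified it myself, and the torchvision resnet here is just an example model):

```python
import weightwatcher as ww
import torchvision.models as models

# any trained pytorch (or keras) model; resnet18 is just an example
# (the weights argument needs torchvision >= 0.13)
model = models.resnet18(weights="DEFAULT")

watcher = ww.WeightWatcher(model=model)
details = watcher.analyze()             # per-layer spectral / power-law stats
summary = watcher.get_summary(details)  # aggregate metrics (e.g. alpha)
print(summary)
```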
DisWastingMyTime OP t1_iu52gcw wrote
Thank you for the well-thought-out response, will look into those (or my team will ;) )
Borky_ t1_iu659bd wrote
Damn is there anything for us poor tf/keras users in there? :(
DigThatData t1_iu6zlbo wrote
i have tunnel vision on the pytorch ecosystem (with the occasional jax cameo)
Borky_ t1_iu8d8o2 wrote
yeah, seems like you guys are getting all the fun toys recently. Either way, I'll save this post for when I'm eventually forced to switch!
DigThatData t1_iue3h89 wrote
I think "recently" started about two years after pytorch was released.