deepestdescent t1_j1b720n wrote
Out of interest, how many discrete states do you end up with? Surely this blows out as you explore the state space?
deepestdescent OP t1_ixvz5mi wrote
Reply to comment by memberjan6 in [D] Alternatives to the shap explainability package by deepestdescent
I can’t really help the current shap project if PRs never get looked at. In fact, most of the issues I want fixed already have open PRs associated with them. I may look into starting my own fork, but I just wanted to see whether the community had moved on to some other library.
deepestdescent OP t1_ixtzv44 wrote
Reply to comment by -xylon in [D] Alternatives to the shap explainability package by deepestdescent
I agree, but it’s pretty hard to contribute if issues and PRs never get looked at. I’m considering making a fork of shap. I just wanted to check whether people had moved on to some other library. From the comments, it appears not.
deepestdescent OP t1_ixsnjio wrote
Reply to comment by ckatem in [D] Alternatives to the shap explainability package by deepestdescent
It would be good if Microsoft could give him time to maintain shap. It’s such an important resource for the ML community.
deepestdescent OP t1_ixsnb2v wrote
Reply to comment by deepestdescent in [D] Alternatives to the shap explainability package by deepestdescent
Yep, so InterpretML does use the unmaintained shap library, unfortunately. It looks like the author of shap works for Microsoft, though, so maybe he also works on InterpretML? I just don’t understand why shap isn’t being actively maintained when so many projects rely on it.
deepestdescent OP t1_ixsmv55 wrote
Reply to comment by WERE_CAT in [D] Alternatives to the shap explainability package by deepestdescent
I want a general black-box library like shap. LightGBM is tied to gradient-boosted trees.
deepestdescent OP t1_ixsms55 wrote
Reply to comment by deepestdescent in [D] Alternatives to the shap explainability package by deepestdescent
Actually, ELI5 is no good for me either; it hasn’t been updated in years.
deepestdescent OP t1_ixsmhl9 wrote
Reply to comment by DigThatData in [D] Alternatives to the shap explainability package by deepestdescent
Other than Captum, all of those libraries rely on the unmaintained shap library.
deepestdescent OP t1_ixsfp64 wrote
Reply to comment by nicolas-gervais in [D] Alternatives to the shap explainability package by deepestdescent
I’ll look into ELI5. Treeinterpreter isn’t really general enough for me. I like that shap can work with any model.
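To show what I mean by model-agnostic, here is a rough sketch of shap’s KernelExplainer, which only needs a prediction function and some background data (the random-forest model and toy arrays below are placeholders for illustration):

```python
# Rough sketch: shap's model-agnostic KernelExplainer.
# The model and data below are placeholders for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(100, 5)
y = np.random.rand(100)
model = RandomForestRegressor().fit(X, y)

# KernelExplainer treats the model as a black box: it only calls
# model.predict, so any model exposing a prediction function works.
background = shap.sample(X, 20)  # a small background set keeps it fast
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])  # explain a handful of rows
```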
deepestdescent OP t1_ixs7uig wrote
Reply to comment by Hydreigon92 in [D] Alternatives to the shap explainability package by deepestdescent
Thank you for that. Does it use the shap library under the hood, or does it have its own implementation for computing Shapley values? If it’s using shap, then I wouldn’t really call it an alternative, as it relies on the same unmaintained backend.
Submitted by deepestdescent t3_z4oxq5 in MachineLearning
deepestdescent t1_j67iajc wrote
Reply to [D] ImageNet2012 Advice by MyActualUserName99
I use PyTorch data loaders to load batches into memory in the background. I believe TensorFlow has similar functionality with tf.data. This should make your data-loading overhead basically negligible if you have a few spare CPU cores.
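Something along these lines, as a rough sketch (the dataset path, batch size, and worker count are placeholders to tune for your machine):

```python
# Rough sketch of background batch loading with PyTorch.
# The dataset path and hyperparameters below are placeholders.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

# Assumes ImageNet is already extracted into class subfolders here.
dataset = datasets.ImageFolder("/path/to/imagenet/train", transform=transform)

# num_workers > 0 loads and preprocesses batches in background
# processes, so the GPU rarely waits on disk I/O; pin_memory
# speeds up host-to-GPU transfers.
loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=8,   # tune to however many spare CPU cores you have
    pin_memory=True,
)

for images, labels in loader:
    pass  # training step goes here
```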