Submitted by Visual-Arm-7375 t3_y1zg5r in MachineLearning
TenaciousDwight t1_is3vui6 wrote
LIME has a lot of problems, and I think more of them are worth mentioning. As an example, this paper shows that the top features in a LIME explanation of an outcome are often neither necessary nor sufficient to cause that outcome.
graphicteadatasci t1_is4o6c9 wrote
Well yeah, LIME tells you about an existing model, right? So if two features are correlated, a model may drop one of them, and the explanations will then say the dropped feature has no predictive power while the correlated feature is important. But we could drop the "important" feature instead and train an equally good model (maybe even better).
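A minimal sketch of that effect (made-up data and feature names, scikit-learn for the models):

```python
# Sketch: two highly correlated features; dropping either one still yields
# an equally good model, so importance attributed to one of them is fragile.
# Data and column choices are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # near-duplicate of x1
noise = rng.normal(size=n)
y = (x1 + 0.1 * noise > 0).astype(int)

X_full = np.column_stack([x1, x2, noise])
X_no_x1 = np.column_stack([x2, noise])      # drop the "important" feature

for name, X in [("with x1", X_full), ("without x1", X_no_x1)]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"{name}: accuracy ~ {acc:.3f}")  # both come out nearly identical
```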
TenaciousDwight t1_is578zw wrote
I think the paper is saying that LIME may explain a model's prediction using features that actually have little influence on the model. I suspect this is tied to the instability problem: run LIME twice on the same point and you can get two significantly different explanations.
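A rough way to see the instability, assuming the `lime` package and an already-fitted classifier `clf` with training data `X_train` and `feature_names` (all of those names are hypothetical placeholders):

```python
# Sketch: explain the same instance twice and compare the top features.
# Because LIME samples random perturbations around the instance, the two
# explanations can differ noticeably unless the random seed is fixed.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                      # training data as a numpy array (assumed)
    feature_names=feature_names,  # column names (assumed)
    mode="classification",
)

instance = X_train[0]
exp1 = explainer.explain_instance(instance, clf.predict_proba, num_features=5)
exp2 = explainer.explain_instance(instance, clf.predict_proba, num_features=5)

print(exp1.as_list())  # (feature, weight) pairs from run 1
print(exp2.as_list())  # (feature, weight) pairs from run 2
```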
Visual-Arm-7375 OP t1_is4vqus wrote
But is this LIME's problem? I mean, it is the model that is not taking into account the correlated feature, not LIME. LIME just looks at the original model.
Visual-Arm-7375 OP t1_is4vmw7 wrote
Damn! The paper is really interesting u/TenaciousDwight! Thanks :) Appreciate it.