Submitted by Visual-Arm-7375 t3_y1zg5r in MachineLearning
graphicteadatasci t1_is4o6c9 wrote
Reply to comment by TenaciousDwight in [P] Understanding LIME | Explainable AI by Visual-Arm-7375
Well yeah, LIME tells you about an existing model, right? So if multiple features are correlated, a model may effectively drop one of them, and the explanations will say that the dropped feature has no predictive power while its correlated counterpart is important. But we could drop the "important" feature instead and train an equally good model (maybe even better). See the sketch below.
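A minimal sketch of that effect, assuming scikit-learn and the `lime` package, with hypothetical features x1/x2 that are perfectly correlated (not from the thread itself):

```python
# Two identical features x1 and x2. A lasso-type model typically keeps only one
# of them, so a LIME explanation credits that one and ignores its twin -- even
# though retraining without the "important" feature works just as well.
import numpy as np
from sklearn.linear_model import Lasso
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1.copy()                              # perfectly correlated with x1
y = 3 * x1 + rng.normal(scale=0.1, size=1000)
X = np.column_stack([x1, x2])

model = Lasso(alpha=0.01).fit(X, y)
print(model.coef_)                          # weight concentrated on one feature

explainer = LimeTabularExplainer(X, feature_names=["x1", "x2"], mode="regression")
exp = explainer.explain_instance(X[0], model.predict, num_features=2)
print(exp.as_list())                        # explanation mirrors the model: one feature "matters"

# Drop whichever feature looked important and retrain: performance is
# essentially unchanged, because the other feature carries the same information.
model_dropped = Lasso(alpha=0.01).fit(X[:, [1]], y)
print(model.score(X, y), model_dropped.score(X[:, [1]], y))
```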
TenaciousDwight t1_is578zw wrote
I think the paper is saying that LIME may explain a model's prediction using features that are actually of little consequence to the model. I have a feeling this is tied to the instability problem: run LIME twice on the same point and you can get two significantly different explanations.
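A small self-contained sketch of that instability, assuming scikit-learn and the `lime` package (hypothetical data; not from the paper): each call to `explain_instance` draws a fresh random neighbourhood around the point, so two runs can disagree, and a small `num_samples` exaggerates it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=[f"f{i}" for i in range(5)],
                                 mode="classification")

# Explain the same point twice with a small sampling budget.
exp_a = explainer.explain_instance(X[0], model.predict_proba, num_samples=200)
exp_b = explainer.explain_instance(X[0], model.predict_proba, num_samples=200)
print(exp_a.as_list())
print(exp_b.as_list())   # often differ noticeably; the gap shrinks as num_samples grows
```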
Visual-Arm-7375 OP t1_is4vqus wrote
But is this LIME's problem? I mean, it's the model that isn't taking the correlated feature into account, not LIME. LIME just looks at the original model.