Understanding model predictions with LIME
In my previous post on model interpretability, I provided an overview
of common techniques used to investigate machine learning models.
In this blog post, I will provide a more thorough explanation of
LIME.
Why is it necessary to understand
interpretability methods?
If you trust a technique to explain the predictions of your
model, it is important to understand the underlying mechanics of
that technique and any potential pitfalls associated with it.
Interpretability techniques are not foolproof, and without a good
understanding of the method, you risk basing your conclusions on
falsehoods.
A similar but significantly more thorough investigation was done in
the following blog post on random forest importances. Feature
importance is often used to determine which features play an
important role in a model's predictions. Random forests provide an
out-of-the-box method to determine the most important features in
the dataset, and a lot of people rely on these feature importances,
interpreting them as a ‘ground truth explanation’ of the dataset.
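To make this concrete, here is a minimal sketch of what such out-of-the-box importances look like in practice, using scikit-learn's `feature_importances_` attribute. The dataset and model settings below are my own illustrative assumptions, not taken from the referenced post:

```python
# A minimal sketch of the out-of-the-box random forest importances
# mentioned above, using scikit-learn on a toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# Fit a random forest; feature_importances_ holds the impurity-based
# (mean decrease in impurity) importances computed during training.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Rank features by their impurity-based importance and show the top few.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Because these values come for free with the trained model, it is tempting to read them as a ground-truth explanation of the dataset, which is exactly the assumption the investigation above calls into question.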