# general
c
Been poking around the site, is there an itemized list of the models supported for explainability analysis?
a
Hello James! For feature importance monitoring in WhyLabs, you should be able to log the data for any type of model that you can extract global feature importance for. You can get these values from libraries like SHAP or others and then log them to WhyLabs. https://docs.whylabs.ai/docs/explainability/ Is that the feature you're looking at using?
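A minimal sketch of the SHAP side of this, assuming a scikit-learn tree model and synthetic data (the model, data, and feature names are illustrative, not from this thread): compute per-sample SHAP values, then collapse them to one global importance number per feature.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data and model standing in for whatever model you already have.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=[f"Feature_{i}" for i in range(4)])
y = X["Feature_1"] * 3 + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Local SHAP values: one row per sample, one column per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Collapse to a single global importance value per feature
# (mean absolute SHAP value is one common choice).
global_importance = {
    col: float(np.abs(shap_values[:, i]).mean())
    for i, col in enumerate(X.columns)
}
print(global_importance)
```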
c
so, basically, if the model works with SHAP (and maybe some others) it works with Why
a
If you can get the values in a dictionary format, it can be logged to WhyLabs: {'Feature_0': -3.330399790430435e-15, 'Feature_1': 12.44482785538977, 'Feature_2': -4.393883142916863e-14, 'Feature_3': -2.7047779443616894e-14}
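A minimal sketch of sending such a dictionary to WhyLabs, assuming the FeatureWeights helper available in recent whylogs versions; the exact API for your version is in the explainability docs linked above, and the credentials below are placeholders.

```python
import os
from whylogs.core.feature_weights import FeatureWeights

# WhyLabs credentials for the writer (placeholder values).
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "org-0"        # placeholder
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "model-0"  # placeholder
os.environ["WHYLABS_API_KEY"] = "replace-me"          # placeholder

# Any {feature_name: importance_value} dictionary works here,
# e.g. the SHAP-derived one from the earlier sketch.
weights = {
    "Feature_0": -3.330399790430435e-15,
    "Feature_1": 12.44482785538977,
    "Feature_2": -4.393883142916863e-14,
    "Feature_3": -2.7047779443616894e-14,
}

feature_weights = FeatureWeights(weights)
feature_weights.writer("whylabs").write()
```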
c
got it
a
We always love hearing about your use case if you're open to sharing!
c
anomaly detection w/ Isolation Forest. Working in Spark/Scala right now. Most explainability utils are on the Python side, so might move over there for model training/inference
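A minimal sketch of that Python-side path, assuming scikit-learn's IsolationForest and synthetic data: SHAP's TreeExplainer generally supports IsolationForest in recent SHAP releases, and the resulting values can be aggregated and logged the same way as above.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest

# Synthetic data standing in for the real anomaly-detection features.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(1000, 4)),
                 columns=[f"Feature_{i}" for i in range(4)])

iso = IsolationForest(n_estimators=100, random_state=0).fit(X)

# Per-sample SHAP values for the anomaly scores.
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(X)

# Same aggregation as the earlier sketch: mean absolute SHAP value per
# feature, ready to log to WhyLabs as a {name: value} dictionary.
global_importance = {
    col: float(np.abs(shap_values[:, i]).mean())
    for i, col in enumerate(X.columns)
}
print(global_importance)
```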