User Question: Calculating Feature Importance
User question: How do you calculate feature importance (i.e., how do you infer the influence that each feature has on the model’s output) in a way that faithfully represents the model’s reasoning?
We’re still iterating on this. For the current implementation, the following steps are important.
If you have a way to measure, for a specific prediction, how certain the model is, then you can try to reproduce that prediction while obfuscating one feature at a time, and do this for every feature. You then get a measurement of how much your accuracy metric, or your degree of certainty, changes as each feature is obfuscated. This tells you how important that specific feature is for that prediction, as sketched below.
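As a rough illustration only (not the production implementation), here is a minimal occlusion-style sketch. It assumes a scikit-learn-style classifier exposing `predict_proba`, takes the top class probability as the certainty of a prediction, and uses a hypothetical per-feature `baseline` vector (for example, training-set means) as the obfuscated value.

```python
import numpy as np

def occlusion_importance(model, x, baseline):
    """Per-feature importance for a single prediction: the drop in the
    model's certainty (top class probability) when that feature is
    obfuscated, i.e. replaced by a baseline such as the training-set mean.

    Assumes a scikit-learn-style classifier exposing `predict_proba`.
    """
    x = np.asarray(x, dtype=float)
    full_certainty = model.predict_proba(x[None, :]).max(axis=1)[0]

    importances = np.empty(x.shape[0])
    for j in range(x.shape[0]):
        x_masked = x.copy()
        x_masked[j] = baseline[j]  # obfuscate feature j
        masked_certainty = model.predict_proba(x_masked[None, :]).max(axis=1)[0]
        importances[j] = full_certainty - masked_certainty  # certainty lost without j
    return importances
```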
You also want to understand how the predicted variable shifts when a feature is shown versus hidden, that is, how much the feature contributes to the value. For example, if you are predicting a numerical value, you can measure how much the prediction changes when the feature is obfuscated.
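The same masking idea can be read off in the units of the target rather than in units of certainty. The sketch below is again only illustrative: it assumes a regression model with a scikit-learn-style `predict` and the same hypothetical `baseline` vector.

```python
import numpy as np

def feature_contributions(model, x, baseline):
    """Per-feature contribution to a single regression prediction: the shift
    in the predicted value when the feature is hidden (replaced by a
    baseline). Assumes a scikit-learn-style regressor exposing `predict`.
    """
    x = np.asarray(x, dtype=float)
    y_full = model.predict(x[None, :])[0]

    contributions = np.empty(x.shape[0])
    for j in range(x.shape[0]):
        x_hidden = x.copy()
        x_hidden[j] = baseline[j]
        # a positive value means feature j pushes the prediction up by this amount
        contributions[j] = y_full - model.predict(x_hidden[None, :])[0]
    return contributions
```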
You can then perturb the feature a few sigmas in either direction to see how stable the prediction is with respect to that specific feature.
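One simple way to probe that stability is to sweep the feature over, say, ±3 standard deviations and look at the spread of the resulting predictions. This sketch assumes you already have an estimate `sigma` of the feature's standard deviation from the training data; the function name and defaults are illustrative.

```python
import numpy as np

def prediction_stability(model, x, feature_idx, sigma, n_sigmas=3, steps=13):
    """Sweep one feature a few standard deviations around its observed value
    and report the spread of the prediction as a rough stability measure.

    `sigma` is the feature's standard deviation estimated from the training
    data; a small spread means the prediction is stable w.r.t. this feature.
    """
    x = np.asarray(x, dtype=float)
    offsets = np.linspace(-n_sigmas * sigma, n_sigmas * sigma, steps)

    preds = []
    for delta in offsets:
        x_shifted = x.copy()
        x_shifted[feature_idx] += delta
        preds.append(model.predict(x_shifted[None, :])[0])
    preds = np.asarray(preds)
    return preds.max() - preds.min(), preds
```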
The combination of these three things lets you understand both how important each feature is and how much it contributes, and you can do this with any machine learning model. What we additionally get from probabilistic and self-aware neural networks is the following:
If your neural network is probabilistic, in the sense that as you train the model you learn weight distributions rather than point weights, then shrinking variances of those weights mean the model is more certain it has found a global minimum. If you repeat the training with one of the features obfuscated and those sigmas grow, the model is less likely to reach that minimum without the variable, which also gives you a global measure of how much certainty that specific feature adds to the model.
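The sketch below illustrates the idea with a mean-field variational linear layer in PyTorch. The `VariationalLinear` and `mean_weight_sigma` names, the KL weight, and zeroing out the masked column are all assumptions made for the example, not the actual training setup: train once with all features, train again with one feature obfuscated, and compare the average posterior sigma of the weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Mean-field Gaussian posterior over the weights of a linear layer."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight_sigma(self):
        return F.softplus(self.w_rho)  # positive standard deviation

    def forward(self, x):
        # reparameterization trick: sample weights on every forward pass
        eps = torch.randn_like(self.w_mu)
        w = self.w_mu + self.weight_sigma() * eps
        return F.linear(x, w, self.bias)

def mean_weight_sigma(X, y, masked_feature=None, epochs=500, lr=1e-2):
    """Train the variational layer and return the average posterior sigma.
    If `masked_feature` is given, that column is zeroed out (obfuscated)
    before training, so the two sigmas can be compared."""
    X = X.clone()
    if masked_feature is not None:
        X[:, masked_feature] = 0.0
    layer = VariationalLinear(X.shape[1], 1)
    opt = torch.optim.Adam(layer.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        pred = layer(X)
        sigma = layer.weight_sigma()
        # data fit plus a simple KL penalty toward a unit Gaussian prior
        kl = 0.5 * (layer.w_mu ** 2 + sigma ** 2 - 2 * torch.log(sigma) - 1).sum()
        loss = F.mse_loss(pred, y) + 1e-3 * kl
        loss.backward()
        opt.step()
    return layer.weight_sigma().mean().item()

# If the sigma with a feature masked is clearly larger than the baseline,
# that feature was helping the model settle into a confident minimum:
# base = mean_weight_sigma(X, y); masked = mean_weight_sigma(X, y, masked_feature=3)
```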
Lastly, we have developed what we call self-aware neural networks, which predict both the target variable and their own error. By showing or hiding a specific feature, you can see how much that predicted error changes, which gives you another heuristic for the importance of that variable, either globally or for a specific prediction.
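As an illustration only (the real architecture is not described here), a self-aware setup can be sketched as a network with two heads: one predicting the target and one predicting the model's own absolute error. Masking a feature and watching the predicted error grow then yields the heuristic described above. All names and the training loss below are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadedNet(nn.Module):
    """A small network with one head for the target value and one head
    for the model's own expected error on that prediction."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.value_head = nn.Linear(hidden, 1)
        self.error_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        value = self.value_head(h)
        error = F.softplus(self.error_head(h))  # predicted error stays non-negative
        return value, error

def train_step(model, opt, X, y):
    opt.zero_grad()
    value, error = model(X)
    residual = (y - value).abs().detach()  # actual error, treated as a fixed target
    loss = F.mse_loss(value, y) + F.mse_loss(error, residual)
    loss.backward()
    opt.step()
    return loss.item()

def error_based_importance(model, x, baseline):
    """How much the predicted error grows when each feature is hidden."""
    model.eval()
    with torch.no_grad():
        _, base_error = model(x[None, :])
        deltas = []
        for j in range(x.shape[0]):
            x_hidden = x.clone()
            x_hidden[j] = baseline[j]
            _, hidden_error = model(x_hidden[None, :])
            deltas.append((hidden_error - base_error).item())
    return deltas
```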
Those are the tools we build on to compute feature importance as well as contributions, or what we call force vector stores, at a specific value.