Using Random Forest to interpret feature importance

In one of Jeremy’s lectures, he said that in order to find the best value of weight decay, he tried lots of values of the NN hyperparameters, put the results into a random forest, and used visualization techniques to find the best hyperparameters. How can I recreate this process on my own?


I am not an expert, but I am very interested in this topic, so I will give it a shot.

I would say that the process is twofold:

  • As with any supervised learning problem, the first step is to fit a model that maps a set of inputs to outputs, using a Random Forest. In this case, the inputs (the independent variables “X”) of the Random Forest are the hyperparameters of the NN model, and the outputs (the dependent variable “y”) of the Random Forest are any performance metrics of the NN model, such as the loss, accuracy, speed of computation, etc. To generate a dataset of inputs/outputs, you can just run your NN model over a range of hyperparameter values and record your metrics of interest.
  • Once you have fit the Random Forest, you can apply model interpretation methods, as taught by Jeremy in the machine learning course. For example, you can use feature importance to identify the hyperparameters with the most impact, and/or partial dependence plots to visualize, say, which NN hyperparameter values lead to the best accuracy.

So the process is relatively straightforward, I think, and I intend to apply it in my own work in the near future. A rough sketch of both steps is below.
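Here is a minimal sketch of both steps using scikit-learn. Everything in it is illustrative: the hyperparameter names, the ranges, and especially the `val_loss` values are random placeholders. In practice each row of the results table would come from actually training your NN once with those hyperparameters and recording the validation loss.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical results table: one row per NN training run.
# In real use, replace the random val_loss with the metric you recorded.
n_runs = 100
results = pd.DataFrame({
    'lr':           10 ** np.random.uniform(-4, -1, n_runs),
    'weight_decay': 10 ** np.random.uniform(-6, -2, n_runs),
    'dropout':      np.random.uniform(0.0, 0.7, n_runs),
    'val_loss':     np.random.uniform(0.2, 1.0, n_runs),  # placeholder
})

X = results[['lr', 'weight_decay', 'dropout']]
y = results['val_loss']

# Step 1: fit a Random Forest mapping hyperparameters -> metric
rf = RandomForestRegressor(n_estimators=100, oob_score=True)
rf.fit(X, y)

# Step 2a: feature importance - which hyperparameters matter most?
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f'{name}: {imp:.3f}')

# Step 2b: partial dependence - how does the metric vary with
# weight_decay once the other hyperparameters are averaged out?
PartialDependenceDisplay.from_estimator(rf, X, ['weight_decay'])
plt.show()
```

The feature-importance printout tells you which hyperparameters to focus on, and the partial dependence plot shows roughly which range of `weight_decay` values gives the lowest loss with the other hyperparameters averaged out.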

---

Note that the following remark is outside the scope of your question, so perhaps I’ll write a new post about it:

I wonder if there is a way to apply the same kind of interpretation methods inside PyTorch.

I mean, I know it is impossible to do this for the hyperparameters of a given NN model, since those are not inputs of the NN model in the original dataset. But my goal is to find the optimal value of some inputs in the original dataset after the NN model is trained (I am talking about a tabular problem here).

In the machine learning course, I think Jeremy mentioned that it should be possible, given that PyTorch can provide the derivatives using automatic differentiation. But I haven’t been able to find any material describing this process inside PyTorch.
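For what it’s worth, here is a minimal sketch of what I imagine that would look like in PyTorch: freeze the trained model’s weights, make the input row a leaf tensor with `requires_grad=True`, and let an optimizer adjust the inputs via autograd. The `net` below is just a stand-in for a trained tabular model, and the objective (minimizing the prediction) is an assumption; continuous features would need clamping to valid ranges, and categorical features going through embeddings would need different handling.

```python
import torch

# Stand-in for a trained tabular model; in practice, use your fitted network.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
net.eval()
for p in net.parameters():
    p.requires_grad_(False)  # freeze the weights; we optimize the input instead

# Start from some baseline row of the tabular data and let autograd
# adjust it to minimize the model's predicted value.
x = torch.zeros(1, 3, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)

for step in range(200):
    opt.zero_grad()
    pred = net(x)          # model's prediction for the current inputs
    loss = pred.sum()      # minimize the prediction (flip the sign to maximize)
    loss.backward()        # d(prediction)/d(inputs) via automatic differentiation
    opt.step()

print('inputs that minimize the prediction:', x.detach())
```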


I wonder about this one also; I would love to see how he arrived at 2.4**4 using this method. He mentioned that there were papers published on this. Has anybody found them? Thanks!

@jeremy

Sorry to get you involved in this. I did look around the forum and didn’t find anything besides this post… So… Jeremy, could you please (please) provide some samples of what you mentioned during your lecture as it pertains to this post? Thank you so much in advance!

I’m interested in reproducing this for my own learning as well. Has anyone else tried this?