Rossman Notebook Lesson 3 - get output from next to last linear layer

I have been studying the Rossmann notebook and watched the associated video several times; however, I am still slightly confused about how to make one change. The neural net and random forest models create predictions completely independently of each other, as far as I can tell. What I am trying to do is train the neural network model, then drop the last layer that outputs only a single number (in my code at least), so the model outputs a vector of 500 features (or whatever the previous linear layer outputs). I would like to train the random forest on these 500 features instead of the original training data. I imagine this shouldn't be too complicated, but I have been googling for hours on this one. I tried using hooks to grab the values of the next-to-last linear layer in addition to the single output, but I can't seem to get that to work either.

Thanks,

Bob

I believe they should be independent of one another - it’s just demonstrating another way to build a model.

Definitely could be interesting. You could go down the hook route, but if you're just after the values, essentially you're just doing a predict.
m = learner.model
layer_list = list(m.children())
cut_model = nn.Sequential(*layer_list[:-1])  # you might have to play around with the cut
cut_model.eval()
res = []
for x, y in iter(learner.data.trn_dl):  # the dataloader lives on learner.data, not the model
    res.append(cut_model(V(x)))         # append each batch of activations
res = torch.cat(res, 0)                 # stack batches into one (n_samples, 500) tensor
res = to_np(res)
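Alternatively, the hook route you mentioned does work too: register a forward hook on the layer just before the head and read its output during a normal forward pass, without cutting the model at all. A minimal sketch with a toy stand-in model (plain PyTorch; the layer sizes here are made up, not the actual Rossmann architecture):

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained net: ... -> Linear(500) -> ReLU -> Linear(1)
model = nn.Sequential(
    nn.Linear(10, 500),
    nn.ReLU(),
    nn.Linear(500, 1),
)
model.eval()

features = []

def save_activations(module, inp, out):
    # called on every forward pass; `out` is the 500-dim activation
    features.append(out.detach())

# hook the ReLU, so we capture the activations actually fed to the head
handle = model[1].register_forward_hook(save_activations)

with torch.no_grad():
    _ = model(torch.randn(32, 10))

handle.remove()                  # don't leave the hook attached
acts = torch.cat(features, 0)    # shape: (32, 500)
```

In the real notebook you would loop over the training dataloader instead of the single random batch, but the hook mechanics are the same.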

Is that the sort of thing you were trying to do? NB the code above hasn't been tested, so you'll probably need to debug and adapt it.
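Once you have the activations as a numpy array (the `res` at the end of the snippet), training the random forest on them is just a regular sklearn fit. A sketch with synthetic data standing in for the extracted features and targets (the names and sizes here are illustrative, not from the notebook):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# stand-ins for the extracted activations and the original training targets
res = np.random.rand(200, 500)   # (n_samples, 500) penultimate-layer outputs
y = np.random.rand(200)          # the same targets the neural net was trained on

rf = RandomForestRegressor(n_estimators=40, n_jobs=-1)
rf.fit(res, y)                   # train the forest on the learned features

preds = rf.predict(res)          # shape: (200,)
```

The key point is that the targets stay the same as in the original training set; only the input features are swapped from the raw columns to the network's learned representation.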