whamp
(Will)
March 30, 2019, 12:47am
1
I’m wondering what the best approach would be to redefine TabularModel to experiment with different activation functions.
I see nn.ReLU used here in fastai.tabular.models:
class TabularModel(nn.Module):
    def __init__(self, emb_szs, n_cont, out_sz, layers, ps=None,
                 emb_drop:float=0., y_range:OptRange=None, use_bn:bool=True, bn_final:bool=False):
        super().__init__()
        ps = ifnone(ps, [0]*len(layers))
        ps = listify(ps, layers)
        self.embeds = nn.ModuleList([embedding(ni, nf) for ni,nf in emb_szs])
        self.emb_drop = nn.Dropout(emb_drop)
        self.bn_cont = nn.BatchNorm1d(n_cont)
        n_emb = sum(e.embedding_dim for e in self.embeds)
        self.n_emb,self.n_cont,self.y_range = n_emb,n_cont,y_range
        sizes = self.get_sizes(layers, out_sz)
        actns = [nn.ReLU(inplace=True) for _ in range(len(sizes)-2)] + [None]
        layers = []
        for i,(n_in,n_out,dp,act) in enumerate(zip(sizes[:-1],sizes[1:],[0.]+ps,actns)):
            layers += bn_drop_lin(n_in, n_out, bn=use_bn and i!=0, p=dp, actn=act)
        if bn_final: layers.append(nn.BatchNorm1d(sizes[-1]))
        self.layers = nn.Sequential(*layers)

    def get_sizes(self, layers, out_sz):
        return [self.n_emb + self.n_cont] + layers + [out_sz]

    def forward(self, x_cat:Tensor, x_cont:Tensor) -> Tensor:
        ...
Maybe just grab the whole thing, dump it into my notebook, and paste in new nn activations?
As far as I can tell there doesn’t appear to be a built-in way to do this in the code, so I just wanted to confirm that before hacking away.
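For example, here is a rough sketch of what I have in mind (untested, and make_actns is just a helper name I made up): in a copied TabularModel I think the only line that really needs to change is the one that builds the activation list.

import torch.nn as nn

def make_actns(sizes, act_cls=nn.SELU):
    # One activation per hidden layer, None for the final linear layer,
    # mirroring the original list comprehension that hard-codes nn.ReLU.
    return [act_cls(inplace=True) for _ in range(len(sizes) - 2)] + [None]

# For a get_sizes() result like [n_emb + n_cont, 200, 100, out_sz]:
print(make_actns([30, 200, 100, 2]))   # [SELU(inplace=True), SELU(inplace=True), None]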
Thanks
2 Likes
Pomo
(Malcolm McLean)
March 30, 2019, 4:14am
2
I’d say hack away. I also do not see any built-in way.
I have done many similar hacks with CNN models. You can copy the code and rewrite it in the notebook, or create the model, enumerate its layers, and replace some of them. It all just works, thanks to PyTorch’s tracking of parameters.
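For the second approach, something along these lines should work (an untested sketch; swap_activations and the choice of LeakyReLU are just my example):

import torch.nn as nn

def swap_activations(module: nn.Module, make_act=lambda: nn.LeakyReLU(0.1)):
    # Recursively replace every nn.ReLU in `module` with a fresh module from make_act().
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, make_act())
        else:
            swap_activations(child, make_act)
    return module

# e.g. with a fastai v1 tabular learner (sketch):
# learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
# swap_activations(learn.model)
# print(learn.model)   # the ReLUs inside model.layers are now LeakyReLU

Since activations have no parameters, nothing else needs to change before you call fit.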
1 Like