I’m trying to build a regression network that has 16 outputs, with one of the 16 outputs weighted 3 times as high (or X times as high in the general case) as the other 15 for loss purposes. I have built a network that works for the 16 outputs when they are all equally weighted, but how would I go about up-weighting one of the outputs above the others within the fastai library? I feel like there should be a simple way of doing this that I’m not thinking of.
I’ve tried this ugly hack and it didn’t work:
bs = 250
data = ColumnarModelData.from_data_frame(PATH, val_idxs=val_idxs, df=X, y=y_trn, cat_flds=cat_feats, bs=bs,
                                         is_multi=False, is_reg=True, test_df=X_tst, shuffle=True)
m = data.get_learner(emb_szs=emb_szs, n_cont=len(contin_feats)+len(mom_feats)+len(fact_feats), emb_drop=0.3, out_sz=16,
                     szs=[1024,1024], drops=[0.0,0.5], y_range=y_range)
weight = torch.ones(16)
weight[0] = 3
m.crit = nn.MSELoss(reduce=False)*weight
which throws an error that basically says: hey, MSELoss isn’t something you can multiply by integers or floats. That makes sense after reading this thread — things need to be wrapped in a torch Variable, and I need to use torch.mul() instead of just *.
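To check my understanding of the error, I tried multiplying the loss *tensor* that the module returns rather than the module object itself, and that part does work in plain PyTorch (this is just an isolated sketch, not the fastai wiring; newer PyTorch spells reduce=False as reduction='none'):

```python
import torch
import torch.nn as nn

weight = torch.ones(16)
weight[0] = 3  # up-weight the first output 3x

mse = nn.MSELoss(reduction='none')  # per-element losses, no reduction
inp = torch.rand(4, 16)             # dummy predictions, batch of 4
targ = torch.rand(4, 16)            # dummy targets

per_elem = mse(inp, targ)              # tensor of shape (4, 16)
weighted = (per_elem * weight).mean()  # multiplying the tensor works fine
```

So the multiplication itself is fine once there’s an actual tensor to multiply; the problem is getting that to happen inside whatever fastai calls as the criterion.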
so then I went deeper into the fast.ai library to see where the loss function is being called and it looks like it’s here for structured data in column_data.py:
def _get_crit(self, data): return F.mse_loss if data.is_reg else F.binary_cross_entropy if data.is_multi else F.nll_loss
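If I’m reading that dispatch right, _get_crit returns the function F.mse_loss itself (uncalled), so crit just needs to be any callable with the signature (input, target) -> scalar loss. A minimal sketch of what I assume would be a drop-in replacement (my_crit is my own hypothetical name, not part of fastai):

```python
import torch
import torch.nn.functional as F

# crit appears to be a plain callable taking (input, target) and
# returning a scalar loss, so any function with that shape should fit
def my_crit(inp, targ):
    return F.mse_loss(inp, targ)

inp = torch.rand(8, 16)
targ = torch.rand(8, 16)
loss = my_crit(inp, targ)

# m.crit = my_crit  # hypothetical assignment onto the learner
```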
So the code is calling F.mse_loss for regression. I tried changing my code to:
m.crit = torch.mul(F.mse_loss(),weight)
but I couldn’t get this to work either — F.mse_loss needs input and target arguments, so it can’t just be called with no arguments and multiplied like that.
I then went into the PyTorch source code for F.mse_loss and tried adding in a multiplication by weight, but that didn’t work either. I feel like I’m chasing my tail here — can someone help point me in the right direction?
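For reference, the behaviour I’m after is something like the function below, written against plain PyTorch (weighted_mse_loss is my own name; whether assigning it to m.crit is the right way to wire it into fastai is exactly the part I’m unsure about):

```python
import torch
import torch.nn.functional as F

weight = torch.ones(16)
weight[0] = 3  # up-weight the first output 3x

def weighted_mse_loss(inp, targ):
    se = (inp - targ) ** 2       # per-element squared errors, (bs, 16)
    return (se * weight).mean()  # scale each output column, then average

# m.crit = weighted_mse_loss  # hypothetical assignment

inp = torch.rand(32, 16)
targ = torch.rand(32, 16)
```

With weight all ones this reduces to plain mean-squared error, and with weight[0] = 3 it can only push the loss up relative to unweighted MSE, which is the behaviour I want.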