Structured Learner

I don’t see EmbeddingModel in the current fast.ai code on GitHub. Was it renamed to MixedInputModel?

Yes, that is correct. EmbeddingModel is just the name I used for this purpose; in the library the model has always been called MixedInputModel (continuous + categorical inputs).

3 Likes

@kcturgutlu, can you share the notebook from which you took the screenshot? I tried following along but get an error in lr_find():

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-50-d81c6bd29d71> in <module>()
----> 1 learn.lr_find()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/learner.py in lr_find(self, start_lr, end_lr, wds)
    135         layer_opt = self.get_layer_opt(start_lr, wds)
    136         self.sched = LR_Finder(layer_opt, len(self.data.trn_dl), end_lr)
--> 137         self.fit_gen(self.model, self.data, layer_opt, 1)
    138         self.load('tmp')
    139 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, metrics, callbacks, **kwargs)
     87         n_epoch = sum_geom(cycle_len if cycle_len else 1, cycle_mult, n_cycle)
     88         fit(model, data, n_epoch, layer_opt.opt, self.crit,
---> 89             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, **kwargs)
     90 
     91     def get_layer_groups(self): return self.models.get_layer_groups()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/model.py in fit(model, data, epochs, opt, crit, metrics, callbacks, **kwargs)
     82         for (*x,y) in t:
     83             batch_num += 1
---> 84             loss = stepper.step(V(x),V(y))
     85             avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
     86             debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/model.py in step(self, xs, y)
     38     def step(self, xs, y):
     39         xtra = []
---> 40         output = self.m(*xs)
     41         if isinstance(output,(tuple,list)): output,*xtra = output
     42         self.opt.zero_grad()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-40-d4690f427dbe> in forward(self, x_cat, x_cont)
    134 
    135     def forward(self, x_cat, x_cont):
--> 136         x = [emb(x_cat[:, i]) for i, emb in enumerate(self.embs)]  # takes necessary emb vectors
    137         x = torch.cat(x, 1)  ## concatenate along axis = 1 (columns - side by side) # this is our input from cats
    138         x = self.emb_drop(x)  ## apply dropout to elements of embedding tensor

<ipython-input-40-d4690f427dbe> in <listcomp>(.0)
    134 
    135     def forward(self, x_cat, x_cont):
--> 136         x = [emb(x_cat[:, i]) for i, emb in enumerate(self.embs)]  # takes necessary emb vectors
    137         x = torch.cat(x, 1)  ## concatenate along axis = 1 (columns - side by side) # this is our input from cats
    138         x = self.emb_drop(x)  ## apply dropout to elements of embedding tensor

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    101             input, self.weight,
    102             padding_idx, self.max_norm, self.norm_type,
--> 103             self.scale_grad_by_freq, self.sparse
    104         )
    105 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py in forward(cls, ctx, indices, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
     55 
     56         if indices.dim() == 1:
---> 57             output = torch.index_select(weight, 0, indices)
     58         else:
     59             output = torch.index_select(weight, 0, indices.view(-1))

TypeError: torch.index_select received an invalid combination of arguments - got (torch.FloatTensor, int, torch.cuda.LongTensor), but expected (torch.FloatTensor source, int dim, torch.LongTensor index)
2 Likes

Hello, I tried to follow the steps described by @kcturgutlu and I modified the MixedInputModel and StructuredLearner classes in column_data.py, but I get the same error as @rohitgeo. Has anyone successfully implemented binary classification on structured data? Here is my implementation. I tried using “y” both as a 1D vector and as a one-hot vector.

I’ve scoured the forums and the net, and no one seems to have been able to do this. @kcturgutlu apparently managed it, but he was using PyTorch directly. His GitHub repo has an example, but it has been modified to do regression, not classification.

1 Like

Hello,

Since there have been many requests for clarification on how to run classification models with MixedInputModel in FAST.AI, I got excited and prepared a fresh notebook showing how to do it, using https://www.kaggle.com/c/avazu-ctr-prediction/data as our case study.

I was very curious to see how embeddings would perform on a problem like this, where the winners used FFMs, which are basically another way of representing categorical data (but including interactions). I don’t know the mathematical relationship between NN embeddings and FFMs, but I’ll definitely dig into that deeper tomorrow. In the meantime, here is a good read on FFMs: https://www.analyticsvidhya.com/blog/2018/01/factorization-machines/.

I’ve created a notebook, which you can access at https://github.com/KeremTurgutlu/deeplearning/blob/master/avazu/FAST.AI%20Binary%20Classification%20-%20Kaggle%20Avazu%20CTR.ipynb

I commented on every important part you need to know, e.g. the hacks we are using, why we are using them, what else can be done with FAST.AI, and so on. As you dig into the source code you will see how flexible it is. I understand why @jeremy didn’t necessarily implement a classifier, since it’s very easy to make the changes yourself.

In summary, here is what we do in the notebook:

  • We don’t touch MixedInputModel at all.
  • We change a single line in ColumnarDataset so the targets play nicely with torch’s cross entropy.
  • We change the crit of the learn object to F.cross_entropy, and that’s it :slight_smile: (of course, if you are interested in ranking probabilities you can use AUC or Gini). A rough sketch of these two changes follows below.
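
Roughly, the two changes amount to something like the following. This is only a sketch, not the exact code in the notebook; it assumes the fastai 0.7-era ColumnarModelData API and the usual structured-data names (md, df, cat_vars, emb_szs, learn):

import torch.nn.functional as F

# change (1): edit the single line in ColumnarDataset (fastai/column_data.py)
# so that the stored target self.y ends up as a 1-D array of int64 class
# indices rather than a float column vector -- that is what F.cross_entropy
# expects (see the notebook for the exact line).

# change (2): build the learner as usual, but with out_sz equal to the number
# of classes, then swap the default loss for cross entropy
learn = md.get_learner(emb_szs, len(df.columns) - len(cat_vars),  # n_cont
                       0.04,                                      # emb_drop
                       2,                                         # out_sz: 2 classes
                       [1000, 500], [0.001, 0.01])                # szs, drops
learn.crit = F.cross_entropy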

Hope this helps

11 Likes

The full Fast.ai code is in the repo I’ve shared.

1 Like

Thank you, Kerem!

I’m trying out @kcturgutlu’s notebook with the Avazu data, and getting this error in lr_find():

TypeError: torch.index_select received an invalid combination of arguments - got (torch.FloatTensor, int, torch.cuda.LongTensor), but expected (torch.FloatTensor source, int dim, torch.LongTensor index)

Here’s the full stack trace:

<ipython-input-34-d81c6bd29d71> in <module>()
----> 1 learn.lr_find()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/learner.py in lr_find(self, start_lr, end_lr, wds)
    135         layer_opt = self.get_layer_opt(start_lr, wds)
    136         self.sched = LR_Finder(layer_opt, len(self.data.trn_dl), end_lr)
--> 137         self.fit_gen(self.model, self.data, layer_opt, 1)
    138         self.load('tmp')
    139 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, metrics, callbacks, **kwargs)
     87         n_epoch = sum_geom(cycle_len if cycle_len else 1, cycle_mult, n_cycle)
     88         fit(model, data, n_epoch, layer_opt.opt, self.crit,
---> 89             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, **kwargs)
     90 
     91     def get_layer_groups(self): return self.models.get_layer_groups()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/model.py in fit(model, data, epochs, opt, crit, metrics, callbacks, **kwargs)
     82         for (*x,y) in t:
     83             batch_num += 1
---> 84             loss = stepper.step(V(x),V(y))
     85             avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
     86             debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/fastai/model.py in step(self, xs, y)
     38     def step(self, xs, y):
     39         xtra = []
---> 40         output = self.m(*xs)
     41         if isinstance(output,(tuple,list)): output,*xtra = output
     42         self.opt.zero_grad()

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-2-dd760a043ee0> in forward(self, x_cat, x_cont)
     24     def forward(self, x_cat, x_cont):
     25         if self.n_emb != 0:
---> 26             x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
     27             x = torch.cat(x, 1)
     28             x = self.emb_drop(x)

<ipython-input-2-dd760a043ee0> in <listcomp>(.0)
     24     def forward(self, x_cat, x_cont):
     25         if self.n_emb != 0:
---> 26             x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
     27             x = torch.cat(x, 1)
     28             x = self.emb_drop(x)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    101             input, self.weight,
    102             padding_idx, self.max_norm, self.norm_type,
--> 103             self.scale_grad_by_freq, self.sparse
    104         )
    105 

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/nn/_functions/thnn/sparse.py in forward(cls, ctx, indices, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
     55 
     56         if indices.dim() == 1:
---> 57             output = torch.index_select(weight, 0, indices)
     58         else:
     59             output = torch.index_select(weight, 0, indices.view(-1))

TypeError: torch.index_select received an invalid combination of arguments - got (torch.FloatTensor, int, torch.cuda.LongTensor), but expected (torch.FloatTensor source, int dim, torch.LongTensor index)

Has anyone got it to work? Does it have something to do with the version of PyTorch? I’m using ‘0.3.0.post4’.

1 Like

I ran everything on the CPU; you need to either run on the CPU as well, or move both the variables and the model onto the GPU.
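
For example, something along these lines should do it (just a sketch, assuming the usual fastai 0.7 Learner API):

# move the model, and hence its embedding weight matrices, onto the GPU so
# they match the cuda LongTensor indices coming from the data loader
learn.model.cuda()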

FYI, it’s probably a good idea to search for similar errors on the forum or Google before asking; most of these issues have already been discussed.

This is looking great @kcturgutlu ! Let me know when you’ve got something more polished since I’d love to be able to share this work widely :slight_smile: FYI your nb link above is a 404. Correct link seems to be https://github.com/KeremTurgutlu/deeplearning/blob/master/avazu/FAST.AI%20Binary%20Classification%20-%20Kaggle%20Avazu%20CTR.ipynb

2 Likes

Thanks for the reminder, I’ve changed the link. I am working on DSBOWL 2018 and USCF simultaneously right now since the tasks are very similar :) But I should be able to optimize and polish the work as you recommend in a couple of days, and I will let you know. Thank you so much!

SIDE NOTE: I didn’t realize how computationally expensive encoder-decoder CNNs are until I actually ran one :slight_smile:

1 Like

Thanks @kcturgutlu!! I’ll definitely try it!

I finally had time to update the notebook; here is the link: https://github.com/KeremTurgutlu/deeplearning/blob/master/avazu/FAST.AI%20Classification%20-%20Kaggle%20Avazu%20CTR.ipynb. Sorry for the late reply :slight_smile:

6 Likes

Thank you! Could you tell me how to send class weights to the loss function?

I tried the following after reviewing the documentation, with no success. I don’t think passing the input and target values directly is possible/useful here:

----> 5 learn.crit = F.cross_entropy(weight=[.1,.99])
      6 learn.crit

TypeError: cross_entropy() missing 2 required positional arguments: 'input' and 'target'

Can you point me towards the right place to set the weights to overcome class imbalance issues?
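
My guess is that the weight has to be a tensor and has to be bound into a callable, since crit is only called later as crit(input, target); maybe something like this, though I have not confirmed it (values below are just placeholders):

from functools import partial
import torch
import torch.nn.functional as F

# one weight per class; on PyTorch 0.3 this may also need to be wrapped in a
# Variable and must live on the same device as the model's output
class_weights = torch.FloatTensor([0.1, 0.99]).cuda()
learn.crit = partial(F.cross_entropy, weight=class_weights)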

1 Like

Getting RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/THCCachingHostAllocator.cpp:258

Been struggling with this for quite a long time now while doing learn.lr_find(). Could you please help?

Looking in the Jupyter logs shows:
block: [0,0,0], thread: [0,0,0] Assertion srcIndex < srcSelectDimSize failed.

in forward(self, x_cat, x_cont)
     26         if self.n_emb != 0:
     27             x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
---> 28             x = torch.cat(x, 1)
     29             x = self.emb_drop(x)
     30         if self.n_cont != 0:
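
Could it be a category code that is out of range for its embedding table (that is what this assertion usually points to)? I am trying to check it roughly like this, guessing at the variable names (df, cat_vars, emb_szs):

# every 0-based category code must be smaller than the vocabulary size used
# to build the corresponding embedding
for (c, _), col in zip(emb_szs, cat_vars):
    mx = df[col].max()
    assert mx < c, f'{col}: max code {mx} does not fit an embedding of size {c}'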

2 Likes

Hey, thanks for sharing the link.

I was following your notebook for the classification task and I’m getting this error. Can you please help me figure out what the reason could be?



Do you have x_conts? Can you try to access it through trn_ds and show what you are getting for x_cont?

No, there isn’t any continuous variable in the data; all are categorical. I’m participating in this competition:

Please pull my latest notebook. The problem is with batchnorm: you don’t have the condition if self.n_cont != 0.

Correct Model Class:

class MixedInputModel(nn.Module):
    def __init__(self, emb_szs, n_cont, emb_drop, out_sz, szs, drops,
                 y_range=None, use_bn=False):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(c, s) for c,s in emb_szs])
        for emb in self.embs: emb_init(emb)
        n_emb = sum(e.embedding_dim for e in self.embs)
        self.n_emb, self.n_cont=n_emb, n_cont
        
        szs = [n_emb+n_cont] + szs
        self.lins = nn.ModuleList([
            nn.Linear(szs[i], szs[i+1]) for i in range(len(szs)-1)])
        self.bns = nn.ModuleList([
            nn.BatchNorm1d(sz) for sz in szs[1:]])
        for o in self.lins: kaiming_normal(o.weight.data)
        self.outp = nn.Linear(szs[-1], out_sz)
        kaiming_normal(self.outp.weight.data)

        self.emb_drop = nn.Dropout(emb_drop)
        self.drops = nn.ModuleList([nn.Dropout(drop) for drop in drops])
        self.bn = nn.BatchNorm1d(n_cont)
        self.use_bn,self.y_range = use_bn,y_range

    def forward(self, x_cat, x_cont):
        if self.n_emb != 0:
            x = [e(x_cat[:,i]) for i,e in enumerate(self.embs)]
            x = torch.cat(x, 1)
            x = self.emb_drop(x)
        if self.n_cont != 0:
            x2 = self.bn(x_cont)
            x = torch.cat([x, x2], 1) if self.n_emb != 0 else x2
        for l,d,b in zip(self.lins, self.drops, self.bns):
            x = F.relu(l(x))
            if self.use_bn: x = b(x)
            x = d(x)
        x = self.outp(x)
        if self.y_range:
            x = F.sigmoid(x)
            x = x*(self.y_range[1] - self.y_range[0])
            x = x+self.y_range[0]
        return x
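
For an all-categorical dataset like yours you would construct it with n_cont=0 and out_sz=2; roughly like this (the other hyperparameters below are just placeholders):

# n_cont=0 since there are no continuous columns; out_sz=2 for two classes
model = MixedInputModel(emb_szs=emb_szs, n_cont=0, emb_drop=0.04, out_sz=2,
                        szs=[1000, 500], drops=[0.001, 0.01]).cuda()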

And let me know how it scores on the LB :wink:

One more thing you can do is to use Factorization Machines and compare them with the embeddings method. Use https://www.csie.ntu.edu.tw/~r01922136/libffm/

4 Likes