Multilabel classification with ULMFiT in Fastai v1

As part of my Master's thesis, I am working on a Theory Ontology Learning problem which, among other things, uses fastai and ULMFiT. The thesis is planned for publication in the Information Systems Journal, and the subject could be useful for many researchers, since I plan to develop a model that automatically predicts the research method used in any scientific paper.

I already wrote about my problem three days ago in this topic, using a toy example and version 1.0.14, but I haven't received any response yet.

I was looking for answers in other topics, e.g. here or here, and I know that there were some bugs in versions before 1.0.15 which, according to the Change Log, are fixed by now. However, the code in the screenshots (the subject of this PR) still doesn't work for me, even after upgrading to 1.0.18.

  • Even though the labels are explicitly defined by the n_labels argument and the loss function is correctly set to torch.nn.functional.binary_cross_entropy_with_logits, fitting the classifier fails with ValueError: Target size (torch.Size([32])) must be the same as input size (torch.Size([32, 2]))
  • Without disabling the accuracy metric via learn.metrics = [], I also get an error like this one: Expected object of scalar type Long but got scalar type Float for argument #2 'other'
  • Also, I am a bit confused about TextDataset vs. TextClasDataBunch: do I need to specify n_labels=3 in both? And for TextLMDataBunch, is the dummy column filled with zeros created under the hood, or should I create it myself? Should I then specify n_labels=0 or leave it out entirely?

Here is the code:

[screenshots of the notebook code]

I also attach two PDFs, because in 1.0.15 it was still working after applying the hack of setting learn.metrics = []. The first PDF shows the error; the second one uses the hack and works.
error_multilabel example.pdf (147.4 KB)
working_multilabel example.pdf (136.8 KB)

I would appreciate any ideas on how to fix this. Also, if somebody has got multilabel classification running, it would be a great help if you could share your notebook. Thanks a lot!

2 Likes

I'm not sure if I can answer all your questions right now, but at least I can tell you where your error is coming from. When you create a classifier for a problem with only one class (like the IMDb dataset), the head of your classifier has size 2: it outputs one score for being in the class and one score for not being in the class. Your labels array has only one column, with a zero or a one for each example. When you use binary cross entropy with logits, your targets need to have the same shape as your model's output, which is why it's not working here. I think you should be using PyTorch's cross entropy in your case, and it should work fine.
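
To make the shape issue concrete, here is a minimal sketch (not from the original post) of why one loss accepts this setup and the other rejects it:

import torch
import torch.nn.functional as F

logits = torch.randn(32, 2)           # classifier head: 2 scores per example
targets = torch.randint(0, 2, (32,))  # single label column: 0 or 1 per example

# cross_entropy expects class indices, so this works for the binary case:
loss = F.cross_entropy(logits, targets)

# binary_cross_entropy_with_logits expects targets with the same shape as
# the input, so the next line raises the ValueError quoted above:
# F.binary_cross_entropy_with_logits(logits, targets.float())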

The case you presented in your notebook is not multilabel classification. If you want to do multilabel classification, then you can use BCE loss with logits; you'll need your targets to have the same shape as your model's output.

You are creating your data_clas object from the original csv files (since you are using TextClasDataBunch.from_csv), which is why you're getting this error.
Note that multi-label problems haven't been tested with the library at all so far, so you should expect some more errors.

In this imdb notebook (see screenshots) I used your notebook from this post, in which you created two extra dummy labels to build an example multilabel training set. I am aware that for a simple binary classification with a 0-or-1 output my last layer would have two outputs, so torch.nn.BCELoss() would be suitable. By default, though, fastai uses torch.nn.functional.cross_entropy(), which applies softmax instead of sigmoid and thus can be used for any number of classes (side note: not to be confused with the number of labels).

For multilabel I used BCEWithLogitsLoss(), which combines a sigmoid layer with the BCE loss to take advantage of the log-sum-exp trick for numerical stability:

$$\mathcal{L}(y, \hat{y}) = -\frac{1}{m}\sum_{i=1}^{m}\Big[\, y_i \log \sigma(\hat{y}_i) + (1 - y_i)\, \log\big(1 - \sigma(\hat{y}_i)\big) \Big]$$

where $m$ is the number of training examples, $y$ the targets, $\hat{y}$ the predicted logits, and $\sigma$ the sigmoid function.
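
As a quick sanity check (a minimal sketch, not part of the original post), the fused loss matches an explicit sigmoid followed by BCELoss, while remaining numerically accurate for large logits:

import torch

logits = torch.randn(4, 3)                     # 4 examples, 3 independent labels
targets = torch.randint(0, 2, (4, 3)).float()  # multilabel targets: 0/1 per label

# BCEWithLogitsLoss fuses the sigmoid into the loss...
fused = torch.nn.BCEWithLogitsLoss()(logits, targets)
manual = torch.nn.BCELoss()(torch.sigmoid(logits), targets)
print(torch.allclose(fused, manual))  # True for moderate logits

# ...but evaluates it with the log-sum-exp trick, so it stays accurate even
# when the sigmoid saturates to exactly 0 or 1 for large |logits|.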

So my problem is still unsolved.

1 Like

Thanks a lot for the information about testing! I will try it with version 0.7 for now. I would appreciate it if you could give us an update later, once the testing is finished :slight_smile:

So as I understand it, I shouldn't use TextDataset at all, but only TextLMDataBunch and TextClasDataBunch, correct?

The easiest approach is to use the databunch classes, yes.
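
For example, a minimal sketch using the from_df variant that appears later in this thread (the DataFrames and column names here are hypothetical):

from fastai.text import *
import pandas as pd

# hypothetical toy data: one text column and one label column
train_df = pd.DataFrame({'text': ['first document ...', 'second document ...'],
                         'label': ['a', 'b']})
valid_df = train_df.copy()
path = '.'

# language model data: only the text column matters here
data_lm = TextLMDataBunch.from_df(path, train_df, valid_df, text_cols='text')

# classifier data: reuse the language model's vocab so the encoder transfers
data_clas = TextClasDataBunch.from_df(path, train_df, valid_df,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.train_ds.vocab)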

Hi,

Did you make any progress on multi-label classification? I am also interested in doing multi-label classification tasks and would like to help if you have a GitHub repo. Thanks.

I am not sure whether it will work at the moment, as it hasn't even been tested yet with fastai v1. I think we should wait or use another library for now. If you want to try it yourself, you can use the Toxic Comment competition dataset on Kaggle and share your notebook. Maybe together we can get it running somehow :slight_smile:

If you want to help with debugging, here is my mini example notebook with the toxic comment dataset: https://github.com/anna-anisienia/ULMFiT_fastai/blob/master/05_Toxic_Comments_with_fastai.ipynb

Thanks in advance!

4 Likes

Hi Anna,

I am facing the same issue with a multi-label classification problem. Were you able to resolve it?

I didn't try it. The library is currently in development and it's changing very fast :slight_smile: Multilabel classification hasn't been tested by the developers yet. But if you try, make sure that you use the up-to-date syntax from the documentation; my notebook uses an old version.

1 Like

We're thinking of unifying single-class vs. multi-class vs. regression, so there should be some changes to help you in the coming days. Then hopefully the API will stabilize :wink:

4 Likes

Thank you so much!!! I’m looking forward to it :slight_smile:

1 Like

I'm working on a regression problem with a language model. Would it be better for me to wait for the new changes to come in?

We're refactoring the data block API to be even more flexible, so yes, waiting until the end of the weekend would be best.

3 Likes

Hi @sgugger,

Is there any plan to release updated example notebooks in line with v1 for ULMFiT?
I'm new to the ML community and fastai.

It would be a great help if I could learn fastai v1 directly.

Thank you.

I saw this PR. Does it mean that multilabel text classification is now fixed, or will be soon? Thank you in advance!

If your labels aren’t one-hot encoded (e.g. a list of tags), it should work properly.

I tried it and got this error: RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'other'.

I think the model doesn't handle the input and output for the metric correctly. I now use data in the following format (a list of tags, not one-hot encoded), where rm is the column holding the list of my labels:
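
For illustration, a hypothetical sketch of that format (the actual DataFrame is in the screenshot; these values are made up):

import pandas as pd

# each row: the paper's full text plus a list of research-method tags,
# matching the column names used in my code below
train_df = pd.DataFrame({
    'fulltext': ['first paper text ...', 'second paper text ...'],
    'rm': [['survey', 'case_study'], ['experiment']],
})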

The number of output features (targets) is correct, i.e. equal to my number of classes, and the classes were also correctly identified in classes.txt. This is the model's summary:

[model summary screenshot]

And my code:

from fastai.text import *

# language model: fine-tune the pretrained WT103 model on my corpus
data_lm = TextLMDataBunch.from_df(path, train_df, valid_df, test_df, bs=48, text_cols='fulltext')
learn = language_model_learner(data_lm, pretrained_model=URLs.WT103)
learn.unfreeze()
learn.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))
learn.save_encoder('lm_encoder')

# classifier: reuse the language model's vocab and fine-tuned encoder
multilabel_data = TextClasDataBunch.from_df(path, train_df, valid_df, bs=32,
                    text_cols='fulltext', label_cols='rm', vocab=data_lm.train_ds.vocab)
multilabel_classifier = text_classifier_learner(multilabel_data)
multilabel_classifier.load_encoder('lm_encoder')
multilabel_classifier.freeze()
multilabel_classifier.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))

How can I fix this error? Is it due to a wrong metric/loss? (I use the defaults: accuracy and FlattenedLoss.)

Here is the entire error log:

RuntimeError                              Traceback (most recent call last)
<ipython-input-16-ba28a844a782> in <module>()
      4 multilabel_classifier.load_encoder('lm_encoder')
      5 multilabel_classifier.freeze()
----> 6 multilabel_classifier.fit_one_cycle(1, 1e-2, moms = (0.8,0.7))

~/.local/lib/python3.6/site-packages/fastai/train.py in fit_one_cycle(learn, cyc_len, max_lr, moms, div_factor, pct_start, wd, callbacks, **kwargs)
     19     callbacks.append(OneCycleScheduler(learn, max_lr, moms=moms, div_factor=div_factor,
     20                                         pct_start=pct_start, **kwargs))
---> 21     learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
     22 
     23 def lr_find(learn:Learner, start_lr:Floats=1e-7, end_lr:Floats=10, num_it:int=100, stop_div:bool=True, **kwargs:Any):

~/.local/lib/python3.6/site-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    164         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
    165         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 166             callbacks=self.callbacks+callbacks)
    167 
    168     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

~/.local/lib/python3.6/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     92     except Exception as e:
     93         exception = e
---> 94         raise e
     95     finally: cb_handler.on_train_end(exception)
     96 

~/.local/lib/python3.6/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     87             if hasattr(data,'valid_dl') and data.valid_dl is not None and data.valid_ds is not None:
     88                 val_loss = validate(model, data.valid_dl, loss_func=loss_func,
---> 89                                        cb_handler=cb_handler, pbar=pbar)
     90             else: val_loss=None
     91             if cb_handler.on_epoch_end(val_loss): break

~/.local/lib/python3.6/site-packages/fastai/basic_train.py in validate(model, dl, loss_func, cb_handler, pbar, average, n_batch)
     52             if not is_listy(yb): yb = [yb]
     53             nums.append(yb[0].shape[0])
---> 54             if cb_handler and cb_handler.on_batch_end(val_losses[-1]): break
     55             if n_batch and (len(nums)>=n_batch): break
     56         nums = np.array(nums, dtype=np.float32)

~/.local/lib/python3.6/site-packages/fastai/callback.py in on_batch_end(self, loss)
    237         "Handle end of processing one batch with `loss`."
    238         self.state_dict['last_loss'] = loss
--> 239         stop = np.any(self('batch_end', not self.state_dict['train']))
    240         if self.state_dict['train']:
    241             self.state_dict['iteration'] += 1

~/.local/lib/python3.6/site-packages/fastai/callback.py in __call__(self, cb_name, call_mets, **kwargs)
    185     def __call__(self, cb_name, call_mets=True, **kwargs)->None:
    186         "Call through to all of the `CallbakHandler` functions."
--> 187         if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics]
    188         return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks]
    189 

~/.local/lib/python3.6/site-packages/fastai/callback.py in <listcomp>(.0)
    185     def __call__(self, cb_name, call_mets=True, **kwargs)->None:
    186         "Call through to all of the `CallbakHandler` functions."
--> 187         if call_mets: [getattr(met, f'on_{cb_name}')(**self.state_dict, **kwargs) for met in self.metrics]
    188         return [getattr(cb, f'on_{cb_name}')(**self.state_dict, **kwargs) for cb in self.callbacks]
    189 

~/.local/lib/python3.6/site-packages/fastai/callback.py in on_batch_end(self, last_output, last_target, **kwargs)
    272         if not is_listy(last_target): last_target=[last_target]
    273         self.count += last_target[0].size(0)
--> 274         self.val += last_target[0].size(0) * self.func(last_output, *last_target).detach().cpu()
    275 
    276     def on_epoch_end(self, **kwargs):

~/.local/lib/python3.6/site-packages/fastai/metrics.py in accuracy(input, targs)
     37     input = input.argmax(dim=-1).view(n,-1)
     38     targs = targs.view(n,-1)
---> 39     return (input==targs).float().mean()
     40 
     41 def error_rate(input:Tensor, targs:Tensor)->Rank0Tensor:

RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'other'

2 Likes

You have to remove the accuracy metric that is hard-coded in RNNLearner by passing metrics=accuracy_thresh (suitable for multi-label classification), as sketched below.
Check that learn.metrics is okay if you run into the issue again.
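
A minimal sketch of that fix, applied to the learner from the code earlier in this thread:

from fastai.text import *

# swap the hard-coded accuracy for a thresholded multi-label accuracy,
# which applies a sigmoid and compares each label score to a threshold
multilabel_classifier = text_classifier_learner(multilabel_data)
multilabel_classifier.metrics = [accuracy_thresh]
multilabel_classifier.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))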

2 Likes