Part 2 Lesson 10 wiki

Hi, I have a script to predict a sentence using ULMFiT:


You just need to change the path and the model name accordingly.

I also tried an experimental beam search for the prediction, in case anyone is interested.
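(The script itself isn't reproduced here, but as a rough illustration, here is a minimal greedy next-word sketch in the fastai 0.7 / lesson-10 style. It assumes you already have a trained language-model learner plus the stoi/itos vocab mappings from the notebook, uses the fastai 0.7 helpers T, V and to_np, and splits on whitespace for brevity instead of the real Tokenizer.)

import numpy as np
from fastai.text import *   # fastai 0.7: brings in the T, V and to_np helpers

def predict_next_words(learner, stoi, itos, text, n_words=10):
    m = learner.model
    m[0].bs = 1                                   # single-sentence "batch"
    m.eval()
    idxs = [stoi[w] for w in text.split()]        # crude whitespace split for brevity
    for _ in range(n_words):
        m.reset()
        t = V(T(np.array([idxs]).T))              # shape [seq_len, 1], on the GPU if available
        res, *_ = m(t)                            # decoder scores for every position
        idxs.append(int(to_np(res[-1]).argmax())) # greedy choice of the next word
    return ' '.join(itos[i] for i in idxs)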


Please help…
I tried to run the imdb notebook on the latest fastai version, but when I run learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1) I get an error. In the MOOC version, which uses previous versions of fastai and PyTorch, it runs fine. There is a mismatch between weight shapes. I tried to debug it and find out what happens to the weights, but so far no luck. self._flat_weights contains a list of weights with shapes like [4600, 1150] or [4600], but nothing of shape [5290000, 1]; maybe somewhere they get flattened. I don't know what is really happening, so please help me.

RuntimeError Traceback (most recent call last)
in
----> 1 learner.lr_find(start_lr=lrs/10, end_lr=lrs*10, linear=True)

~/Desktop/fastai-master/courses/dl2/fastai/learner.py in lr_find(self, start_lr, end_lr, wds, linear, **kwargs)
343 layer_opt = self.get_layer_opt(start_lr, wds)
344 self.sched = LR_Finder(layer_opt, len(self.data.trn_dl), end_lr, linear=linear)
--> 345 self.fit_gen(self.model, self.data, layer_opt, 1, **kwargs)
346 self.load('tmp')
347

~/Desktop/fastai-master/courses/dl2/fastai/learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
247 metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
248 swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 249 swa_eval_freq=swa_eval_freq, **kwargs)
250
251 def get_layer_groups(self): return self.models.get_layer_groups()

~/Desktop/fastai-master/courses/dl2/fastai/model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, visualize, **kwargs)
139 batch_num += 1
140 for cb in callbacks: cb.on_batch_begin()
--> 141 loss = model_stepper.step(V(x),V(y), epoch)
142 avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
143 debias_loss = avg_loss / (1 - avg_mom**batch_num)

~/Desktop/fastai-master/courses/dl2/fastai/model.py in step(self, xs, y, epoch)
48 def step(self, xs, y, epoch):
49 xtra = []
---> 50 output = self.m(*xs)
51 if isinstance(output,tuple): output,*xtra = output
52 if self.fp16: self.m.zero_grad()

~/.conda/envs/myroot36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)

~/.conda/envs/myroot36/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input)
90 def forward(self, input):
91 for module in self._modules.values():
---> 92 input = module(input)
93 return input
94

~/.conda/envs/myroot36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)

~/Desktop/fastai-master/courses/dl2/fastai/lm_rnn.py in forward(self, input)
104 with warnings.catch_warnings():
105 warnings.simplefilter("ignore")
--> 106 raw_output, new_h = rnn(raw_output, self.hidden[l])
107 new_hidden.append(new_h)
108 raw_outputs.append(raw_output)

~/.conda/envs/myroot36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)

~/Desktop/fastai-master/courses/dl2/fastai/rnn_reg.py in forward(self, *args)
122 """
123 self._setweights()
--> 124 return self.module.forward(*args)
125
126 class EmbeddingDropout(nn.Module):

~/.conda/envs/myroot36/lib/python3.6/site-packages/torch/nn/modules/rnn.py in forward(self, input, hx)
177 if batch_sizes is None:
178 result = _impl(input, hx, self._flat_weights, self.bias, self.num_layers,
--> 179 self.dropout, self.training, self.bidirectional, self.batch_first)
180 else:
181 result = _impl(input, batch_sizes, hx, self._flat_weights, self.bias,

RuntimeError: shape '[5290000, 1]' is invalid for input of size 4600
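For what it's worth, the two numbers in that error are related by the hidden size used in the lesson; a quick arithmetic check (assuming the notebook's AWD-LSTM hidden size nh=1150):

nh = 1150
gates = 4                  # an LSTM stacks 4 gate matrices, hence the 4*nh dimension
print(gates * nh)          # 4600     -> the [4600, 1150] and [4600] shapes seen in _flat_weights
print(gates * nh * nh)     # 5290000  -> the size of the full, flattened 4600 x 1150 weight matrix

So the newer PyTorch RNN code appears to be trying to view a 4600-element tensor as the full 4600 x 1150 weight matrix, which suggests a fastai/PyTorch version mismatch rather than a data problem (see the advice further down in the thread to stick with fastai 0.7 and a matching older torch).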


Thank you very much cahya - I really appreciate it!

Can someone please tell me why training does not continue?

As you can see from the photos, it does not increase beyond 2%…

Thank you for the amazing lecture, by the way!

Can someone please confirm which fastai and torch versions to use in order to follow this tutorial and run the code?
Or else, it would be better if there were updated code in line with the latest fastai releases, i.e. 1.0.12, 1.0.11, or so.

The Jupyter notebooks for the Deep Learning courses 1 and 2 only work with fastai version 0.7. Follow the installation instructions here:
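If in doubt, a quick way to check what is actually installed (a small sketch; pkg_resources is used here only because older fastai releases may not expose a __version__ attribute):

import torch
import pkg_resources
print(pkg_resources.get_distribution('fastai').version)   # expect 0.7.x for the course notebooks
print(torch.__version__)                                   # fastai 0.7 predates torch 1.0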

I'm observing the same thing with my training… My classifier is overfitting in exactly the same manner as yours: converging to around 94.7% accuracy in epoch 3/4 and then overfitting, reaching a training loss of 0.06 by the 14th epoch.

The only thing I changed from Jeremy's solution was to use a batch size of 24 instead of 48.

I'm having an error with Path():

DATA_PATH = Path('data/')
DATA_PATH.mkdir(exist_ok=True)

NameError: name 'Path' is not defined

I think you need to import pathlib to be able to use Path.
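For example, the two lines from above then work as expected:

from pathlib import Path   # Path lives in the standard-library pathlib module

DATA_PATH = Path('data/')
DATA_PATH.mkdir(exist_ok=True)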

Getting the following error in get_all:

NameError Traceback (most recent call last)
in ()
2 import spacy
3 nlp = spacy.load('en')
----> 4 tok_trn, trn_labels = get_all(df_trn, 1)
5 tok_val, val_labels = get_all(df_val, 1)

in get_all(df, n_lbls)
3 for i, r in enumerate(df):
4 print(i)
----> 5 tok_, labels_ = get_texts(r, n_lbls)
6 tok += tok_;
7 labels += labels_

in get_texts(df, n_lbls)
5 texts = list(texts.apply(fixup).values)
6
----> 7 tok = Tokenizer().proc_all_mp(partition_by_cores(texts))
8 return tok, list(labels)

NameError: name 'Tokenizer' is not defined

I’m encountering the same error, with the same numbers (that is 5290000 and 4600) while attempting to train the language model with a very different dataset.

I think he was already running the notebook with 0.7…

All you have to do is make sure you do:
from fastai import *

everything you need will be imported automatically!
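For the lesson-10 imdb notebook specifically, the wildcard import at the top of the notebook is from fastai.text (fastai 0.7), which is where Tokenizer and partition_by_cores come from. A minimal sketch (assumes fastai 0.7 and the spaCy 'en' model being installed):

from fastai.text import *

texts = ['this movie was great', 'this movie was terrible']
tok = Tokenizer().proc_all_mp(partition_by_cores(texts))   # multi-process tokenization
print(tok)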

Can the ULMFiT pre-trained model be useful for text summarization? I've seen it used for text classification. Are there any examples of summarization using this pre-trained model? If so, please point me to them. Thanks.

seq2seq is not part of fastai yet (afaik).

I'm getting this error when running the lesson 10 notebook: 'Tokenizer' object has no attribute 'proc_all_mp'.
I've looked at the code, and proc_all_mp doesn't seem to be implemented there. Has the code been changed?
How can I solve this? Please help.

I was using the latest version of fastai before; this got resolved by downgrading to fastai 0.7.

I don't understand the first issue, about why the range is between n_lbls+1 and len(df.columns).
Did you manage to understand that? If you did, please tell me the reason.

It seems like for i in range(n_lbls+1, len(df.columns)) is only activated when you have more than one text column.

Suppose you have 4 label columns, df[0] to df[3], followed by 3 text columns, df[4] to df[6]:

n_lbls=4
for i in range(n_lbls+1, len(df.columns)) will become for i in range(5,7), which will add df[5] and df[6] to the text (df[4], the first text column, is added before the loop).

In most cases, where you have only 1 label column followed by 1 text column, giving n_lbls=1,
that range will be for i in range(2,2), which hence doesn't add anything to the text.
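As a rough sketch of the relevant part of the notebook's get_texts (simplified: the fixup cleaning and tokenization steps are left out; BOS and FLD are the markers from the lesson):

import pandas as pd

BOS, FLD = 'xbos', 'xfld'   # beginning-of-text and field markers from the lesson

def get_texts_sketch(df, n_lbls=1):
    labels = df.iloc[:, list(range(n_lbls))].values.astype('int64')
    texts = f'\n{BOS} {FLD} 1 ' + df[n_lbls].astype(str)    # first text column
    for i in range(n_lbls + 1, len(df.columns)):            # only runs if there are extra text columns
        texts += f' {FLD} {i-n_lbls} ' + df[i].astype(str)
    return texts, labels

# With 1 label column and 1 text column the loop is range(2, 2), i.e. it never runs:
df = pd.DataFrame([[0, 'first review'], [1, 'second review']])
texts, labels = get_texts_sketch(df, n_lbls=1)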


I can't come up with a case where there is more than one label.
But your reply helps me understand that this code can be applied not just to imdb, but to other cases as well.
Thanks!!

I am getting the same error, "ValueError: not enough values to unpack (expected 2, got 1)". Did you find any solution for this error?