Lesson 4 In-Class Discussion ✅

Fantastic! Thanks a lot!

I tried it out, but I’m getting an error that tells me FloatList doesn’t exist

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-62-f7b4e1430f64> in <module>()
      3 data = (TabularList.from_df(concat_clean, path=path, cat_names=cat_vars, cont_names=cont_vars, procs=tfms)
      4                    .split_by_idx(val_idx)
----> 5                    .label_from_df(cols=dep_var, label_cls=FloatList)
      6                    .databunch(bs = bs))

NameError: name 'FloatList' is not defined

That means you don’t have the latest version of fastai (possibly you need the dev version).
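If so, one way to update (assuming a pip-based install; the second command is just one common way to get the development version):

pip install fastai --upgrade
pip install git+https://github.com/fastai/fastai.git   # development version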

On GCP, I think one needs to use a K80 GPU; it doesn’t run on a P4. So 11 GB looks to be the minimum GPU RAM needed to run the language model.

Usually the pool of words in scope is everything that appears more than once, but in NLP you’ll also limit the size of your vocabulary. I think he said the limit was 60k in this case, so effectively that’s the top 60k words of your original selection. How well it works will depend a lot on the size of your corpus. To be honest, I’m surprised he went with a minimum count of 2; there’s not a lot you can learn about a word that only appears twice.
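For illustration only, here is a minimal plain-Python sketch of a capped vocabulary like the one described above (this is not the fastai implementation; the 60,000 cap and the appears-at-least-twice rule are just the numbers mentioned here):

from collections import Counter

def build_vocab(tokens, max_vocab=60000, min_freq=2):
    # Count every token, keep only those seen at least `min_freq` times,
    # and cap the result at the `max_vocab` most frequent entries.
    counts = Counter(tokens)
    kept = [tok for tok, c in counts.most_common(max_vocab) if c >= min_freq]
    return {tok: idx for idx, tok in enumerate(kept)}

vocab = build_vocab("the cat sat on the mat and the dog sat too".split())
# -> {'the': 0, 'sat': 1}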

Try this

I’m getting this error while running learning rate finder

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-21-d81c6bd29d71> in <module>
----> 1 learn.lr_find()

/opt/anaconda3/lib/python3.6/site-packages/fastai/train.py in lr_find(learn, start_lr, end_lr, num_it, stop_div, **kwargs)
     26     cb = LRFinder(learn, start_lr, end_lr, num_it, stop_div)
     27     a = int(np.ceil(num_it/len(learn.data.train_dl)))
---> 28     learn.fit(a, start_lr, callbacks=[cb], **kwargs)
     29 
     30 def to_fp16(learn:Learner, loss_scale:float=512., flat_master:bool=False)->Learner:

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py in fit(self, epochs, lr, wd, callbacks)
    160         callbacks = [cb(self) for cb in self.callback_fns] + listify(callbacks)
    161         fit(epochs, self.model, self.loss_func, opt=self.opt, data=self.data, metrics=self.metrics,
--> 162             callbacks=self.callbacks+callbacks)
    163 
    164     def create_opt(self, lr:Floats, wd:Floats=0.)->None:

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     92     except Exception as e:
     93         exception = e
---> 94         raise e
     95     finally: cb_handler.on_train_end(exception)
     96 

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     82             for xb,yb in progress_bar(data.train_dl, parent=pbar):
     83                 xb, yb = cb_handler.on_batch_begin(xb, yb)
---> 84                 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     85                 if cb_handler.on_batch_end(loss): break
     86 

/opt/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     20 
     21     if not loss_func: return to_detach(out), yb[0].detach()
---> 22     loss = loss_func(out, *yb)
     23 
     24     if opt is not None:

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   1665     if size_average is not None or reduce is not None:
   1666         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 1667     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   1668 
   1669 

/opt/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1520     if input.size(0) != target.size(0):
   1521         raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 1522                          .format(input.size(0), target.size(0)))
   1523     if dim == 2:
   1524         return torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

ValueError: Expected input batch_size (10080) to match target batch_size (32).

Broadly, the answer is Yes.

Yes. Correct. Great recent Twitter Blog post on this - https://blog.twitter.com/engineering/en_us/topics/insights/2018/embeddingsattwitter.html


You could try averaging them.

Converting the timestamp to categorical variables and then to entity embeddings would help; the add_datepart function in fastai derives rich additional columns from a date field for exactly this. As for the movie genres and other metadata: in general, the more data, the better the model’s predictions, and entity embeddings can be applied to these too.
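For example, a rough sketch of using it (assuming fastai v1 exposes add_datepart from its tabular module; the DataFrame and column names are made up for illustration):

import pandas as pd
from fastai.tabular import add_datepart  # assumed import path for fastai v1

# Hypothetical click data with a raw timestamp column
df = pd.DataFrame({'timestamp': pd.to_datetime(['2018-11-01', '2018-11-15']),
                   'clicks': [3, 7]})

add_datepart(df, 'timestamp')  # replaces 'timestamp' with derived parts: year, month, week, day of week, elapsed, ...
print(df.columns)              # inspect the new columns before feeding them to a tabular model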


I am working on an ad click prediction dataset. How do I get started, and is there any reference material I can follow?

Just a heads-up: looks like the edited video Jeremy just posted doesn’t work - I get errors in both Chrome and Firefox.

Oops sorry that’s in Lesson 3!

I am trying to build a classifier for the SQL queries in our database, based on query text and runtime. I am hoping I can leverage transfer learning, where I build a learner and re-train it for different environments. One question I have, though: query runtime is not fixed forever, because DBAs can tune their queries and the runtime changes. If that happens, what can I do with the learner I built on the old runtime data? Do I need to retrain the entire learner, or is there a way to “refresh” it so that only the “outdated” data is removed?

Probably easiest would be for you to show an example of some code you had before with a custom dataset, and we can show how we’d suggest doing it now. If anything turns out less easy, then we’ll make sure we fix things to make it at least as easy 🙂

Yeah, sorry we didn’t get into discriminative learning this lesson - will do it next one.


I suspect regression should work fine now, although I haven’t tried it for NLP.


It almost never makes sense to start from scratch. Old English and Modern English have some similarities, so pre-training should help.


Well spotted. The reason is that they’re different: you can label with multiple columns, so it’s called cols, but you can only split on one column, so it’s called col. 🙂
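For example, a rough sketch following the tabular data block code earlier in this thread (the is_valid column and the split_from_df call are my own illustration, not from the original post):

data = (TabularList.from_df(df, path=path, cat_names=cat_vars, cont_names=cont_vars, procs=procs)
                   .split_from_df(col='is_valid')   # splitting uses a single column, hence col
                   .label_from_df(cols=dep_var)     # labelling can use one or more columns, hence cols
                   .databunch(bs=bs))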


I don’t think changing bs after creation will make any difference, since the dataloaders are already created by then. Try passing that to load as a param.

And try looking at a batch of your data (e.g. with data.train_dl.one_batch()) and check the shape, to make sure you have the size you expected.
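A minimal sketch of that check (assuming data is a DataBunch whose batches come back as (x, y) tensor pairs):

xb, yb = data.train_dl.one_batch()  # pull one batch from the training DataLoader
print(xb.shape, yb.shape)           # confirm the batch size and shapes match what you expect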


Has anyone encountered this issue, or does anyone know how to resolve it?

In the lesson3-imdb notebook, the exception

NameError: name 'TextFilesList' is not defined

is returned by the command

data_lm = (TextFilesList.from_folder(path)
                        .filter_by_folder(include=['train', 'test'])
                        .random_split_by_pct(0.1)
                        .label_for_lm()
                        .databunch())

I can find no documentation for a TextFilesList object in fastai.

Note: I have refreshed the repo as recommended before running the notebook:
git pull in the /notebooks/course-v3 folder, and
pip install fastai --upgrade