Part 2 Lesson 10 wiki


(Jeremy Howard) #491

The Stepper for RNNs handles calling reset for you.
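Roughly speaking (this is just the idea, not the exact source), it checks whether the model exposes a reset() method and calls it for you, so the RNN hidden state is reinitialized without any extra code in your training loop:

class Stepper:
    # Minimal sketch of the idea; the real Stepper also handles the optimizer,
    # loss, gradient clipping, fp16, and so on.
    def __init__(self, model, opt, crit):
        self.m, self.opt, self.crit = model, opt, crit
        self.reset()

    def reset(self):
        # RNN models in the library define reset() to reinitialize their hidden
        # state; models without a reset() attribute are simply left alone.
        if hasattr(self.m, 'reset'):
            self.m.reset()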


(Igor Kasianenko) #492

Hi,
I’m running the latest version of the notebook from the lesson resources on my Windows laptop with a 960M GPU, which is not supported by PyTorch 0.3, so I’m using the latest conda env with PyTorch 0.4.0.

I’m running into the same NotImplementedError as mentioned above, during the fitting of the language model in rnn_reg.py:
learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1)

Debugging shows that there is no Embedding entry in fastai\lib\site-packages\torch\nn\backends\backend.py when calling self.function_classes.get(name) with name set to 'Embedding'.

Is there a path forward to solve this? Should I open an issue about it on the PyTorch GitHub?

P.S. Is it OK for PyTorch not to maintain backward compatibility? E.g. functions that worked in 0.3 don’t work in 0.4.


(Jeremy Howard) #493

You’ll need to git pull to get the latest version which fixes this.


(Igor Kasianenko) #494

That fixed the error. I was running out of memory on my 4GB GPU, so I decreased bs to 16, and now there is another error when running learner.fit:

RuntimeError                              Traceback (most recent call last)
<ipython-input-20-b544778ca021> in <module>()
----> 1 learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1)

C:\Users\Developer\fastai\courses\dl2\fastai\learner.py in fit(self, lrs, n_cycle, wds, **kwargs)
    285         self.sched = None
    286         layer_opt = self.get_layer_opt(lrs, wds)
--> 287         return self.fit_gen(self.model, self.data, layer_opt, n_cycle, **kwargs)
    288 
    289     def warm_up(self, lr, wds=None):

C:\Users\Developer\fastai\courses\dl2\fastai\learner.py in fit_gen(self, model, data, layer_opt, n_cycle, cycle_len, cycle_mult, cycle_save_name, best_save_name, use_clr, use_clr_beta, metrics, callbacks, use_wd_sched, norm_wds, wds_sched_mult, use_swa, swa_start, swa_eval_freq, **kwargs)
    232             metrics=metrics, callbacks=callbacks, reg_fn=self.reg_fn, clip=self.clip, fp16=self.fp16,
    233             swa_model=self.swa_model if use_swa else None, swa_start=swa_start,
--> 234             swa_eval_freq=swa_eval_freq, **kwargs)
    235 
    236     def get_layer_groups(self): return self.models.get_layer_groups()

C:\Users\Developer\fastai\courses\dl2\fastai\model.py in fit(model, data, n_epochs, opt, crit, metrics, callbacks, stepper, swa_model, swa_start, swa_eval_freq, **kwargs)
    130             batch_num += 1
    131             for cb in callbacks: cb.on_batch_begin()
--> 132             loss = model_stepper.step(V(x),V(y), epoch)
    133             avg_loss = avg_loss * avg_mom + loss * (1-avg_mom)
    134             debias_loss = avg_loss / (1 - avg_mom**batch_num)

C:\Users\Developer\fastai\courses\dl2\fastai\model.py in step(self, xs, y, epoch)
     55         if self.loss_scale != 1: assert(self.fp16); loss = loss*self.loss_scale
     56         if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
---> 57         loss.backward()
     58         if self.fp16: update_fp32_grads(self.fp32_params, self.m)
     59         if self.loss_scale != 1:

C:\Users\Developer\Anaconda3\envs\fastai\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
     91                 products. Defaults to ``False``.
     92         """
---> 93         torch.autograd.backward(self, gradient, retain_graph, create_graph)
     94 
     95     def register_hook(self, hook):

C:\Users\Developer\Anaconda3\envs\fastai\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     87     Variable._execution_engine.run_backward(
     88         tensors, grad_tensors, retain_graph, create_graph,
---> 89         allow_unreachable=True)  # allow_unreachable flag
     90 
     91 

RuntimeError: inconsistent range for TensorList output

(Stefan) #495

@jeremy Is there documentation available of the exact preprocessing and training steps for the language model (WikiText-103)? I looked at the Hindi repository, but I’m not sure whether the “t_up” trick is used there.

So it would be great to have a kind of reference implementation/documentation for the WikiText-103 language model :slight_smile:
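My (possibly wrong, hence the question) understanding of the “t_up” trick is that fully uppercase words are lowercased but preceded by a marker token, so the model can still see capitalization without the vocabulary blowing up. Roughly:

# Sketch only - the real fastai Tokenizer does this (and more) during tokenization.
def mark_caps(tokens):
    out = []
    for t in tokens:
        if len(t) > 1 and t.isupper():
            out.append('t_up')   # marker meaning "the next word was all caps"
            t = t.lower()
        out.append(t)
    return out

print(mark_caps(['THIS', 'is', 'IMPORTANT']))  # ['t_up', 'this', 'is', 't_up', 'important']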


(Jeremy Howard) #496

This is the script we used: https://github.com/fastai/fastai/blob/master/courses/dl2/imdb_scripts/train_tri_wt.py


#497

Interesting, this seems to come from the freezing of layers. The model trains without error if you unfreeze it completely; I don’t know where this one comes from.
I’ll try to look into it.
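For reference, the workaround that avoids the error is just this (a sketch; the fit arguments are the ones used earlier in the thread):

learner.unfreeze()   # make all layer groups trainable instead of only the last one
learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1)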


#498

Hi all. I’m using Google Colaboratory and I’m getting an AttributeError when importing fastai.text: module ‘scipy’ has no attribute ‘sparse’. I’ve searched the forums and noticed others having the same issue.

Not a fast.ai problem, but I’m just wondering if anyone has found a patch…

I have installed scipy 1.1.0 and also tried importing the submodule directly with from scipy import sparse as sp.

Any help would be greatly appreciated!


AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 from fastai.text import *
      2 import html

/usr/local/lib/python3.6/dist-packages/fastai/text.py in <module>()
----> 1 from .core import *
      2 from .learner import *
      3 from .lm_rnn import *
      4 from torch.utils.data.sampler import Sampler
      5 import spacy

/usr/local/lib/python3.6/dist-packages/fastai/core.py in <module>()
----> 1 from .imports import *
      2 from .torch_imports import *
      3
      4 def sum_geom(a,r,n): return a*n if r==1 else math.ceil(a*(1-r**n)/(1-r))
      5

/usr/local/lib/python3.6/dist-packages/fastai/imports.py in <module>()
      3 import pandas as pd, pickle, sys, itertools, string, sys, re, datetime, time, shutil, copy
      4 import seaborn as sns, matplotlib
----> 5 import IPython, graphviz, sklearn_pandas, sklearn, warnings, pdb
      6 import contextlib
      7 from abc import abstractmethod

/usr/local/lib/python3.6/dist-packages/sklearn_pandas/__init__.py in <module>()
      1 __version__ = '1.6.0'
      2
----> 3 from .dataframe_mapper import DataFrameMapper  # NOQA
      4 from .cross_validation import cross_val_score, GridSearchCV, RandomizedSearchCV  # NOQA
      5 from .categorical_imputer import CategoricalImputer  # NOQA

/usr/local/lib/python3.6/dist-packages/sklearn_pandas/dataframe_mapper.py in <module>()
      5 import numpy as np
      6 from scipy import sparse
----> 7 from sklearn.base import BaseEstimator, TransformerMixin
      8
      9 from .cross_validation import DataWrapper

/usr/local/lib/python3.6/dist-packages/sklearn/__init__.py in <module>()
    132 else:
    133     from . import __check_build
--> 134     from .base import clone
    135 __check_build  # avoid flakes unused variable error
    136

/usr/local/lib/python3.6/dist-packages/sklearn/base.py in <module>()
     11 from scipy import sparse
     12 from .externals import six
---> 13 from .utils.fixes import signature
     14 from . import __version__
     15

/usr/local/lib/python3.6/dist-packages/sklearn/utils/__init__.py in <module>()
      9
     10 from .murmurhash import murmurhash3_32
---> 11 from .validation import (as_float_array,
     12                          assert_all_finite,
     13                          check_random_state, column_or_1d, check_array,

/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py in <module>()
     13
     14 import numpy as np
---> 15 import scipy.sparse as sp
     16
     17 from ..externals import six

AttributeError: module 'scipy' has no attribute 'sparse'
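For anyone trying to reproduce it, a quick sanity check like this (run before importing fastai.text) should show whether scipy.sparse itself imports cleanly in the runtime - just a diagnostic, not a fix:

import scipy
print(scipy.__version__)          # expecting 1.1.0 after the upgrade
from scipy import sparse
print(sparse.csr_matrix((2, 2)))  # builds an empty 2x2 sparse matrix if sparse loads fine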


(Erik Chan) #499

Is it me, or are the files no longer available at files.fast.ai?


(Emil) #500

For me, the files are still there.


(Erik Chan) #501

Thanks, they’re back online now.


(WG) #502

Is there a definitive answer to this question anywhere?

Based on the ULMFiT paper, the recommendation is to fine-tune “only the last layer” (section 3.2) before unfreezing and applying discriminative learning rates to the other layers. So why is there the line learner.unfreeze() immediately before the model fitting begins? It seems that it should be learner.freeze_to(-1), unless I’m missing something (which is typically the case :slight_smile: )
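To make the contrast concrete, this is the order I would have expected based on the paper (a sketch using the notebook’s API; lrs, wd, and the fit arguments are just the values used earlier in the notebook, so treat them as illustrative):

learner.freeze_to(-1)   # fine-tune only the last layer group first, per section 3.2
learner.fit(lrs/2, 1, wds=wd, use_clr=(32,2), cycle_len=1)

learner.unfreeze()      # then unfreeze everything and rely on the discriminative LRs in lrs
learner.fit(lrs, 1, wds=wd, use_clr=(32,2), cycle_len=1)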


(Even Oldridge) #503

Hey Christine, I’m starting to take a look at something similar and I’m curious what your results were here?


(adrian) #504

I thought it was a typo but didn’t get round to reporting it. I do the same, just unfreezing the final layer.


(Erik Chan) #505

Did you have any luck with this? Can you not simply add more classes?


(Christine) #506

Hi - Unfortunately I wasn’t able to get better results with the different losses. (I got everything up and running, but the accuracy was always worse.) I’m sure I didn’t exhaust every possibility though, so let me know if you have better success than I did!


(Gavin Francis) #507

Reading Universal Language Model Fine-tuning for Text Classification
Not sure if anyone has commented on this before, but in @jeremy and Sebastian’s excellent paper there seems to be an error in the STLR formula (3) for p when t > cut. In order to match the figure, it should be something like (T-t)/(T-cut).
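Written out with the paper’s notation (T the total number of training iterations, cut the iteration where the increase stops), the corrected fraction p at iteration t would read as follows; the t > cut branch is my proposed reading of the figure, not a quote from the paper:

p =
\begin{cases}
  t / \mathit{cut} & \text{if } t < \mathit{cut} \\
  \dfrac{T - t}{T - \mathit{cut}} & \text{otherwise}
\end{cases}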


(gram) #509

I understand why this is much better than word embeddings, but with this model is there a way to use the words (in the context of the corpus you fed the model) the way you would with word embeddings?
For example: the king - man + woman = queen equation?
What exactly does using the transfer-learned, trained model look like in practice? What can it do besides classification?
I know chatbots use LSTMs. Maybe someone can point me in the direction of how this model would work in a chatbot? How does one extract meaning from new text run through the trained model? How would it work with translation? How would it work with a GAN? How would it work with search? How would it work as Q&A? This would help me understand what exactly is happening with the whole process. Please?

(If someone just wants to give an answer to one of these that’d be appreciated. I know no one person will answer all of this.)


(Thomas) #510

It’s a bit handwavy, but I thought about this a bit (actually the second half).
One thing the LM keeps over word vectors is quite a bit more context, since we are looking at LSTM states.
As such, I would expect the equivalent of king - man + woman = queen to be a relatively poor use of such a model.
For chatbots, QA, and MT, I think using the encoder (or, for the latter, encoder + decoder) will be beneficial, as the history comes naturally in those settings.
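In practice, with this lesson’s code, reusing the model would start from the encoder weights, something like this (fastai 0.7-style calls; downstream_learner and the encoder name are just placeholders):

learner.save_encoder('lm1_enc')             # keep only the language model's RNN encoder
# build a downstream model (classifier, QA head, ...) on the same vocabulary, then:
downstream_learner.load_encoder('lm1_enc')  # start it from the fine-tuned encoder weights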


(gram) #511

Thanks for the answer. I’d like to make a Q&A ‘bot’ with this, but I don’t know which direction to go to learn how to make one.
Maybe I can just take a chatbot that uses word embeddings and modify it to use this lesson’s model instead? I’m too new at this. Sometimes the steps to progress are too high to climb.