Part 2 lesson 11 wiki


(Pascal Guedon) #305

Solution found here: https://github.com/facebookresearch/fastText/issues/411
for everyone having problems compiling fastText under Windows, like @KarlH, @Chris_Palmer and me.

I can’t believe it hasn’t been updated yet (since January), as it’s only a missing #include.
To summarize:

  • git clone https://github.com/facebookresearch/fastText.git
  • modify src/productquantizer.cc and add: #include <string>
  • open an Anaconda prompt, activate the fastai env, then go to the fastText source directory and run: pip install .
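For step 2, the only change needed is a one-line include at the top of the file. Here is a sketch of that edit using sed on a stand-in file (the real path is src/productquantizer.cc inside the clone; on Windows you can simply make the edit in a text editor):

```shell
# Stand-in demo of step 2: prepend the missing include.
printf 'namespace fasttext {}\n' > productquantizer_demo.cc
sed -i '1i #include <string>' productquantizer_demo.cc
head -n1 productquantizer_demo.cc   # -> #include <string>
```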

(魏璎珞) #306

Are my two pickle.dump files usable?


(Pascal Guedon) #307

Yes, it worked well! Thanks again.
I used them to continue with the notebook before I found a solution to compile fastText. In any case, they’re a lot smaller than the fastText files for the wiki models in EN and FR (which I haven’t even finished downloading yet).


(Chris Palmer) #308

Thanks for this @pascal!

I noticed that the readme suggests that you use the latest stable release:

Did you just use the master branch with no issues?


(Pascal Guedon) #309

Hi @Chris_Palmer. You can see in the commits that there are only a few since the last release, mostly fixes and documentation updates, so no new features. IMO you can definitely use the master branch. I haven’t had any issues with it so far.


#310

Hi, I’m trying to increase the number of layers from 2 to 3 in the final model Seq2SeqRNN_All to make it more expressive. I removed the nl=2 hardcoding in __init__().

However, when I try to run .fit() I get the following error. Does anyone know where my dimension mismatch is coming from and what I should change? As a new user I can only attach one image, but I added as much of the error trace as possible. Thank you!


(Desislava Petkova) #311

The line

h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)

concatenates the forward and backward RNN hidden states.

In this case the bidirectional RNN has 2 layers, so the line concatenates correctly. But if num_layers != 2, the line should be modified to something like:

# Concatenate forward and backward RNN
# h.size() = [num_layers, 2, batch_size, num_hidden]
# --->
# h.size() = [num_layers, batch_size, 2 * num_hidden]
h = h.view(self.nl,2,bs,-1).permute(0,2,1,3).contiguous().view(self.nl,bs,-1)
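To sanity-check that generalized line, here is a small sketch using NumPy in place of a torch tensor (the sizes are made up, and transpose() plays the role of permute()):

```python
import numpy as np

nl, bs, nh = 3, 4, 5  # num_layers, batch_size, num_hidden (made-up sizes)
# A bidirectional RNN returns hidden state of shape [nl * 2, bs, nh],
# with forward/backward directions interleaved along the first axis.
h = np.arange(nl * 2 * bs * nh).reshape(nl * 2, bs, nh)

# Same operation as the torch line: split out the direction axis,
# move it next to the hidden axis, then merge the two.
out = h.reshape(nl, 2, bs, nh).transpose(0, 2, 1, 3).reshape(nl, bs, 2 * nh)

# For each layer, forward and backward states are now side by side.
assert out.shape == (nl, bs, 2 * nh)
assert np.array_equal(out[0, 0, :nh], h[0, 0])  # forward,  layer 0, batch 0
assert np.array_equal(out[0, 0, nh:], h[1, 0])  # backward, layer 0, batch 0
```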

(Leslie Chiang) #312

Hi, I am kind of late here, but I wonder if anyone has tried downloading the full ImageNet data to run the devise section? The val folder seems different from the one in the Jupyter notebook: it now contains just the JPEG files, with the XML annotations in a separate folder. Just thought the code below would be useful to someone for grabbing the fastText word vector via the synset-to-word-vector mapping (syn2wv).

import os
import xml.etree.ElementTree as ET

images = []
img_vecs = []

n_trn = 0
for d in (PATH/'ILSVRC/Data/CLS-LOC/train').iterdir():
    if d.name not in syn2wv: continue
    vec = syn2wv[d.name]
    for f in d.iterdir():
        images.append(str(f.relative_to(PATH)))
        img_vecs.append(vec)
        n_trn +=1

n_val=0
for d in (PATH/'ILSVRC/Data/CLS-LOC/val/').iterdir():
    vname = d.name.split('.')[0]
    extract = ET.parse(os.path.join(PATH/'ILSVRC/Annotations/CLS-LOC/val/',vname +'.xml'))
    dname = extract.getroot()[-1][0].text   # object-name
    #print(vname, dname)
    if dname not in syn2wv: continue
    #print('OK', dname)
    vec = syn2wv[dname]
    images.append(str(d.relative_to(PATH)))
    img_vecs.append(vec)
    n_val += 1

n_trn, n_val
(739526, 28700)
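For anyone puzzled by extract.getroot()[-1][0].text in the val loop: in the ILSVRC val annotation files, the last child of the root <annotation> element is an <object> whose first child <name> holds the synset id. A sketch with a trimmed-down stand-in annotation (real files carry more fields):

```python
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for an ILSVRC val annotation file
xml_str = """<annotation>
  <folder>val</folder>
  <filename>ILSVRC2012_val_00000001</filename>
  <object>
    <name>n01751748</name>
    <bndbox><xmin>1</xmin><ymin>1</ymin><xmax>2</xmax><ymax>2</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(xml_str)
# root[-1] is the last <object>; its first child <name> is the synset id
synset = root[-1][0].text
print(synset)  # -> n01751748
```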


(Faisal Ilaiwi) #313

Was the topic of stacked RNNs covered before? Jeremy mentioned it in this session, but I looked back at lessons 6 and 7 of part one and I can’t seem to find anything about the different ways RNN cells can be put together. Am I missing something?


(shao) #314

Regarding the devise notebook, can we use parallel GPUs?
I tried

models = ConvnetBuilder(arch, md.c, is_multi=False, is_reg=True, xtra_fc=[1024], ps=[0.2,0.2])
models = nn.DataParallel(models, device_ids=[0, 1, 2, 3])
learn = ConvLearner(md, models, precompute=True)
learn.opt_fn = partial(optim.Adam, betas=(0.9,0.99))

But I got an error

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-59-ee48c5dc5c68> in <module>()
----> 1 learn = ConvLearner(md, models, precompute=True)
      2 #learn = ConvLearner.from_model_data(md, models, precompute=True)
      3 learn.opt_fn = partial(optim.Adam, betas=(0.9,0.99))

/opt/conda/lib/python3.6/site-packages/fastai/conv_learner.py in __init__(self, data, models, precompute, **kwargs)
     98         if hasattr(data, 'is_multi') and not data.is_reg and self.metrics is None:
     99             self.metrics = [accuracy_thresh(0.5)] if self.data.is_multi else [accuracy]
--> 100         if precompute: self.save_fc1()
    101         self.freeze()
    102         self.precompute = precompute

/opt/conda/lib/python3.6/site-packages/fastai/conv_learner.py in save_fc1(self)
    162 
    163     def save_fc1(self):
--> 164         self.get_activations()
    165         act, val_act, test_act = self.activations
    166         m=self.models.top_model

/opt/conda/lib/python3.6/site-packages/fastai/conv_learner.py in get_activations(self, force)
    153 
    154     def get_activations(self, force=False):
--> 155         tmpl = f'_{self.models.name}_{self.data.sz}.bc'
    156         # TODO: Somehow check that directory names haven't changed (e.g. added test set)
    157         names = [os.path.join(self.tmp_path, p+tmpl) for p in ('x_act', 'x_act_val', 'x_act_test')]

/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    396                 return modules[name]
    397         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 398             type(self).__name__, name))
    399 
    400     def __setattr__(self, name, value):

AttributeError: 'DataParallel' object has no attribute 'name'
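The traceback shows fastai reading self.models.name, a custom attribute that nn.DataParallel does not forward: the wrapper only exposes its own attributes, and the original model lives under .module. A plain-Python sketch of the pattern (class names are made up, no torch required):

```python
class Model:
    """Stand-in for the fastai model object, with a custom attribute."""
    name = "resnet34"

class Wrapper:
    """Mimics how nn.DataParallel wraps a model: the wrapped object's
    custom attributes are reachable only via .module."""
    def __init__(self, module):
        self.module = module

wrapped = Wrapper(Model())

# Direct access fails, just like 'DataParallel' object has no attribute 'name'
try:
    wrapped.name
    direct_access_ok = True
except AttributeError:
    direct_access_ok = False

print(direct_access_ok)        # -> False
print(wrapped.module.name)     # -> resnet34
```

So wrapping the ConvnetBuilder itself breaks any fastai code that expects those attributes; one avenue worth trying is applying DataParallel to the inner model only, or reaching it through .module.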

(Bruno Seznec) #315

Naive question: is there a smaller ImageNet dataset compatible with the devise notebook?
Thanks

Sorry, found it:
https://tiny-imagenet.herokuapp.com/
in a previous post


(zipp) #316

Has anyone tried to fit a LM for the seq2seq task?
It seems a very interesting area, and I wonder if there is any paper or anyone who has tried it.


(魏璎珞) #317

Thank you for this amazing post, @phaniteja. Thank you too for the equally amazing reply, @stemill. I benefited a lot, but only after reading them several times :joy:


(魏璎珞) #318

Would you be kind enough to make your notebook available? I would like to use it to troubleshoot my crappy translation.


(魏璎珞) #320

I happen to be scrutinizing that notebook. It seems like you are asking about the section on teacher forcing, but I’m not sure what you are asking :thinking:?


(Rohit Gupta) #321

Didn’t see this part:

if (y is not None) and (random.random()<self.pr_force):
    if i>=len(y): break
    dec_inp = y[i]

My bad. Actually, I was trying to create a similar network in Keras using an attention mechanism. Due to static graphs, I wasn’t able to create one with the teacher-forcing method.
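That snippet is the core of teacher forcing: with probability pr_force, the decoder’s next input is the ground-truth token rather than its own prediction. A minimal standalone sketch of the loop (function and variable names are made up, and predict stands in for a full decoder step):

```python
import random

def decode(targets, predict, pr_force, seq_len):
    """Greedy decode loop with teacher forcing.
    predict(token) -> next predicted token; targets is the ground truth."""
    dec_inp = 0  # start-of-sequence token (an assumption for this sketch)
    outputs = []
    for i in range(seq_len):
        pred = predict(dec_inp)
        outputs.append(pred)
        dec_inp = pred
        # Teacher forcing: sometimes feed the true token back instead
        if targets is not None and random.random() < pr_force:
            if i >= len(targets): break
            dec_inp = targets[i]
    return outputs

# pr_force=1.0: the decoder always sees ground truth as its next input;
# pr_force=0.0: it always feeds back its own predictions.
print(decode([5, 6, 7], lambda t: t * 2, pr_force=1.0, seq_len=3))  # -> [0, 10, 12]
print(decode([5, 6, 7], lambda t: t * 2, pr_force=0.0, seq_len=3))  # -> [0, 0, 0]
```

Because the branch on random.random() changes the data flow at every step, it is natural in an eager framework like PyTorch but awkward to express in a static graph.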


(魏璎珞) #322

I believe Jeremy said during the video lecture that he switched to PyTorch precisely because it was difficult to implement teacher forcing in TensorFlow, so perhaps don’t try too hard on this.