Is fastai_v1 broken for the latest release 1.0.21 (and latest pytorch)?

Update: I switched to an older nightly pytorch build (10/25) and I still get the same error.

I can’t get the text (imdb) example to run.

Is anyone else having issues with the latest fastai_v1 and nightly pytorch build?

Code:

from fastai import *
from fastai.text import *

path = untar_data(URLs.IMDB_SAMPLE)
path.ls()
df = pd.read_csv(path/'texts.csv')
df.head()

data_lm = TextDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data_lm, drop_mult=0.3, pretrained_model=URLs.WT103)
learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))

Here is my error:

Downloading http://files.fast.ai/data/examples/imdb_sample
epoch train_loss valid_loss accuracy
Traceback (most recent call last):
  File "text_example.py", line 14, in <module>
    learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7))
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fastai/train.py", line 22, in fit_one_cycle
    learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py", line 162, in fit
    callbacks=self.callbacks+callbacks)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py", line 94, in fit
    raise e
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py", line 84, in fit
    loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/fastai/basic_train.py", line 18, in loss_batch
    out = model(*xb)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 479, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 73 were given
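From what I can tell, the 73 means forward() received 72 positional arguments (plus self) instead of a single batch tensor, i.e. the out = model(*xb) call in loss_batch unpacked something it shouldn’t have. Here is a toy snippet (nothing to do with fastai internals, just illustrating the mechanism) that reproduces the same message:

import torch
import torch.nn as nn

class Toy(nn.Module):
    ## forward expects one input tensor (plus the implicit self)
    def forward(self, x):
        return x * 2

model = Toy()
xb = torch.randn(72, 8)

model(xb)       ## fine: a single positional argument

try:
    ## Unpacking spreads the tensor's 72 rows into 72 separate positional
    ## arguments; Python also counts `self`, hence "takes 2 ... but 73 were given".
    model(*xb)
except TypeError as e:
    print(e)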

My environment (aws)

ubuntu@ip-172-31-19-203:~/milestone$ python -c 'import fastai; fastai.show_install(1)'

=== Software === 
python version  : 3.6.5
fastai version  : 1.0.21
torch version   : 1.0.0.dev20181025
nvidia driver   : 396.44
torch cuda ver  : 9.2.148
torch cuda is   : available
torch cudnn ver : 7104
torch cudnn is  : enabled

=== Hardware === 
nvidia gpus     : 1
torch available : 1
  - gpu0        : 16160MB | Tesla V100-SXM2-16GB

=== Environment === 
platform        : Linux-4.4.0-1070-aws-x86_64-with-debian-stretch-sid
distro          : #80-Ubuntu SMP Thu Oct 4 13:56:07 UTC 2018
conda env       : Unknown
python          : /home/ubuntu/anaconda3/bin/python
sys.path        : 
/home/ubuntu/src/cntk/bindings/python
/home/ubuntu/anaconda3/lib/python36.zip
/home/ubuntu/anaconda3/lib/python3.6
/home/ubuntu/anaconda3/lib/python3.6/lib-dynload
/home/ubuntu/anaconda3/lib/python3.6/site-packages
/home/ubuntu/anaconda3/lib/python3.6/site-packages/IPython/extensions

Fri Nov  9 05:29:43 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.44                 Driver Version: 396.44                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   29C    P0    23W / 300W |     10MiB / 16160MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


I just ran the text notebook in examples without any problem. Does this one run for you?

Yes, I can run the examples notebook. I was trying to run code (imdb) from the Deep Learning V3 course. Are those notebooks compatible with version 1.0.21 of fastai?

I am facing another problem with the same example. It is running, but the accuracy is pretty low.

[screenshot of training output showing the low accuracy]

The imdb notebook in course v3 should run normally.

I also noticed a large drop in classifier accuracy, from 0.7 to 0.3, when upgrading fast.ai from 1.0.18 to 1.0.22. If this is a bug, I wonder whether the current test suite is robust enough to catch this kind of accuracy regression.
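Something along these lines would at least catch a silent collapse. It's only a rough sketch reusing the snippet from the top of this thread: it assumes learn.validate() returns [valid_loss, accuracy] for the default metric, and the 0.25 floor is an arbitrary guess for the language model's next-word accuracy on the tiny IMDB sample:

from fastai import *
from fastai.text import *

def test_imdb_sample_accuracy_floor():
    ## Slow end-to-end smoke test: downloads the sample, trains one cycle,
    ## then checks the validation metric against a fixed floor.
    path = untar_data(URLs.IMDB_SAMPLE)
    data_lm = TextDataBunch.from_csv(path, 'texts.csv')
    learn = language_model_learner(data_lm, drop_mult=0.3, pretrained_model=URLs.WT103)
    learn.fit_one_cycle(1, 1e-2, moms=(0.8, 0.7))
    valid_loss, acc = learn.validate()   ## assumed to return [valid_loss, accuracy]
    ## Fail loudly if a release quietly halves the metric.
    assert float(acc) > 0.25, f"accuracy regressed: {float(acc):.3f}"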

Datasets such as the IMDB sample are not version-matched to fastai library releases, which can lead to subtle bugs and confusion.


Coming back to this after 3 weeks: after updating to 1.0.33, accuracy is back up to the 0.7 range. I’m curious to know what happened so that future regressions can be detected/avoided.

@Esteban

Have you eventually solved this? I’m trying to get a version of this notebook to run: https://github.com/fastai/fastai/blob/master/courses/dl2/translate.ipynb

I’m getting this error when calling the function below from the lesson 11 notebook (part II of the course):

TypeError: forward() takes 2 positional arguments but 3 were given

I’m using the latest fast.ai library, but this is an old notebook. The error pops up when running the cell:

learn.lr_find()

Apparently, the new fast.ai library calls forward on the class below with 3 positional arguments. Presumably, I need to adjust that function here in the notebook (see the sketch after the class below)?

Comments welcome

@Sylvain

class Seq2SeqRNN(nn.Module):
    def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
        super().__init__()
        self.nl,self.nh,self.out_sl = nl,nh,out_sl
        ## create encoder embedding
        self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
        ## add dropout
        self.emb_enc_drop = nn.Dropout(0.15)
        ## create the RNN: em_sz_enc = size of embedding, nh = our choice (56 for now),
        ## num_layers: how many layers we want, plus some dropout inside the RNN
        self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)  ## standard pytorch, you could use an LSTM too
        ## some output to fit the decoder, so let's use a linear layer
        self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)  ## nh hidden units into the decoder embedding size

        self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
        self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)  ## or use an LSTM
        self.out_drop = nn.Dropout(0.35)
        self.out = nn.Linear(em_sz_dec, len(itos_dec))
        self.out.weight.data = self.emb_dec.weight.data

    ## forward pass
    def forward(self, inp):
        sl,bs = inp.size()
        h = self.initHidden(bs)
        emb = self.emb_enc_drop(self.emb_enc(inp))
        enc_out, h = self.gru_enc(emb, h)
        dec_inp = torch.zeros(bs).long()  ## _bos_ for the first step
        res = []
        for i in range(self.out_sl):  ## output sequence length (see constructor)
            emb = self.emb_dec(dec_inp).unsqueeze(0)
            outp, h = self.gru_dec(emb, h)
            outp = self.out(self.out_drop(outp[0]))
            res.append(outp)
            dec_inp = V(outp.data.max(1)[1])  ## 1 is the word index of the largest value
            if (dec_inp==1).all(): break
        return torch.stack(res)

    def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
    def reset(self): torch.zeros(self.nl, self.nh, self.out_sl)  ## self.out_sl, ...enc
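One adjustment I’m considering is simply making forward tolerant of the extra positional argument. This is just a sketch: the subclass name is mine, and it assumes the third argument the library passes is the target batch `y`, which it then ignores (so no teacher forcing):

class Seq2SeqRNN_Tolerant(Seq2SeqRNN):
    ## Accept (and ignore) an optional extra positional argument that a newer
    ## training loop may pass to forward() -- assumed here to be the target
    ## batch. This only avoids the TypeError; it does not use `y` at all.
    def forward(self, inp, y=None):
        return super().forward(inp)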