Developer chat

I would like to commit a bug fix to the WideResNet class, where the final layers (the BatchNorm and Linear layers) have a hard-coded input channel number equal to the 4th value in n_channels. That results in an error unless num_groups is 3. I would also like to add a new default input channel number for more flexibility.
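Roughly what I have in mind (a sketch, not the exact patch; the helper name is just for illustration and Flatten refers to fastai's Flatten layer):

import torch.nn as nn
from fastai.layers import Flatten

# Illustrative helper (hypothetical name): size the head off the last entry of
# n_channels instead of the hard-coded n_channels[3], so it works for any num_groups.
def head_layers(n_channels, num_classes):
    last_nf = n_channels[-1]            # == n_channels[num_groups]
    return [nn.BatchNorm2d(last_nf), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(last_nf, num_classes)]

# The first conv's input channels (currently fixed at 3) would likewise become a
# parameter, e.g. n_in_channels=3, to provide the extra flexibility mentioned above.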

Yes, thank you for the heads up, @butchland. Indeed, there was a mistake in the auto-export list - a deprecated backward-compatibility create_cnn was still there. Apologies about that. It has been fixed in 1.0.48.

Hello, I think there is an issue with the dropout values in awd_lstm_lm_config.

Values used to be (https://github.com/fastai/fastai/blob/release-1.0.42/fastai/text/learner.py#L16)

  • [0.25, 0.1, 0.2, 0.02, 0.15]
    • [input_p=dps[0], output_p=dps[1], weight_p=dps[2], embed_p=dps[3], hidden_p=dps[4]]

But now are (https://github.com/fastai/fastai/blob/release-1.0.43/fastai/text/models/awd_lstm.py#L194)

  • output_p=0.25, hidden_p=0.1, input_p=0.2, embed_p=0.02, weight_p=0.15

Values should be

  • input_p=0.25, output_p=0.1, weight_p=0.2, embed_p=0.02, hidden_p=0.15
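
To make the mix-up explicit, here is a plain-Python sketch (not fastai source) of the same five numbers under the two key assignments:

dps = [0.25, 0.1, 0.2, 0.02, 0.15]                       # values in release 1.0.42
correct = dict(input_p=dps[0], output_p=dps[1], weight_p=dps[2],
               embed_p=dps[3], hidden_p=dps[4])
buggy = dict(output_p=0.25, hidden_p=0.1, input_p=0.2,
             embed_p=0.02, weight_p=0.15)                # values in release 1.0.43
assert sorted(buggy.values()) == sorted(correct.values())   # same numbers...
assert buggy != correct                                      # ...assigned to the wrong keys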

I’ve added in https://github.com/fastai/fastai/issues/1809

Thanks, it’s fixed now.

3 posts were split to a new topic: Dev Install problem on Paperspace

What are your thoughts on adding a new doc page to explain fastai.imports? I’m guessing that familiarizing myself with the imports and abbreviations that the fastai authors find useful will make me more productive, but there is not a quick way to see what purpose each import serves. I could write a doc page with a table that lists each import, a brief description, and a link to the homepage for the import if you’d entertain such a pull request.


Thank you @jpizarrom for pointing that out, it is a very good catch.

Maybe it explains why increasing drop_mult had the inverse effect on regularisation in my experiments, since the output dropout was increased more than two-fold:

with bug: input_p=0.2,  output_p=0.25, weight_p=0.15, embed_p=0.02, hidden_p=0.1
correct:  input_p=0.25, output_p=0.1,  weight_p=0.2,  embed_p=0.02, hidden_p=0.15
          0.5,          +1.5,          -0.5,          0,            -0.05
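
A hedged illustration of the downstream effect (drop_mult=0.7 is just an example value; as far as I can tell, the learner scales every *_p value in the config by drop_mult):

drop_mult = 0.7   # example only
buggy   = dict(input_p=0.2,  output_p=0.25, weight_p=0.15, embed_p=0.02, hidden_p=0.1)
correct = dict(input_p=0.25, output_p=0.1,  weight_p=0.2,  embed_p=0.02, hidden_p=0.15)
for k in correct:
    print(f"{k}: buggy {buggy[k]*drop_mult:.3f} vs correct {correct[k]*drop_mult:.3f}")
# output_p ends up 2.5x larger under the bug, so raising drop_mult over-regularises
# the output while under-regularising the input/hidden dropouts.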

I wonder, how did you notice the change?

Hello,

I started reading about the model two weeks ago in blogs, and also reading the code of some use cases on GitHub.

When I was trying to fine-tune ULMFiT on my own dataset, I was always (kind of) overfitting the language model and the text classifier, always getting a lower training loss than validation loss. I was trying drop_mult in the range 0.3 to 0.7 because those were the common values.

After lots of trial and error, and because I realised the API had changed recently, I decided to review the ULMFiT implementation and its recent changes.


Hi @stas

In order to use the suggested from fastai.callbacks.mem import PeakMemMetric, I had to update the fastai library, which somehow messed up my whole Python environment; I have been trying to fix it for almost two working days. Anyhow, after everything seemed to be alright, I got this error when trying to train a model:

create_cnn is deprecated and is now named cnn_learner.

When I switched it to cnn_learner and ran it, the Jupyter notebook cell remains busy forever without actually constructing a model or training it. Any solution for this?

Thanks for your help :slight_smile:

This is the code I am using:

from functools import partial
from fastai.vision import *                      # models, ImageDataBunch, cnn_learner, metrics, ShowGraph
from fastai.callbacks import CSVLogger, SaveModelCallback, ReduceLROnPlateauCallback, EarlyStoppingCallback
from fastai.callbacks.mem import PeakMemMetric

arch = models.resnet18
aName = '_resNet18_test'
epochS = 10
maxLR = 1e-02

# d_path, tfms and tr are assumed to be defined earlier in the notebook
data = ImageDataBunch.from_folder(f'{d_path}' + "LC_B_5", ds_tfms=(tfms, []), valid='test', bs=8)
mName_S = 'bestModel_' + str(5) + '_S_' + aName
learnS = cnn_learner(data, arch, pretrained=True,
                     metrics=[accuracy, error_rate],
                     callback_fns=[partial(CSVLogger, filename='stat_' + str(tr) + '_S_' + aName),
                                   ShowGraph, PeakMemMetric,
                                   partial(SaveModelCallback, monitor='val_loss', mode='auto', name=mName_S),
                                   partial(ReduceLROnPlateauCallback, monitor='val_loss', min_delta=0.01, patience=3),
                                   partial(EarlyStoppingCallback, monitor='val_loss', min_delta=0.01, patience=5)])

learnS.fit_one_cycle(epochS, max_lr=maxLR, moms=[0.95, 0.85], div_factor=25.0)

Perhaps some of the callbacks you use weren't correctly ported to the new API. Try disabling them one at a time and see if you find the culprit, then post which one gives you trouble.
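
For example, a minimal sketch reusing your data, arch and maxLR from above - if this bare version already hangs, the callbacks are not the culprit:

learnS = cnn_learner(data, arch, pretrained=True, metrics=[accuracy, error_rate])
learnS.fit_one_cycle(1, max_lr=maxLR)   # one epoch is enough to see whether it hangs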

I thought of that too and removed all the callbacks, but it's still the same.

If there are no callbacks, it shouldn't be a problem; create_cnn was just renamed to cnn_learner. Perhaps something is messed up in your Python setup, as you were saying?

Try a fresh conda env, with nothing in it and just conda install -c pytorch -c fastai fastai (and jupyter)?

Thanks @stas. I will try your suggestion and will let you know how it goes.

But last time, unless I used:

pip install git+https://github.com/fastai/fastai.git

I would get a "module not found" error for:
from fastai.callbacks.mem import PeakMemMetric

You probably had a very old fastai; when you install the latest released fastai, 1.0.48, it's there: https://github.com/fastai/fastai/blob/1.0.48/fastai/callbacks/mem.py
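
A quick sanity check you can run in the notebook to see which version is actually being picked up:

import fastai
print(fastai.__version__)                       # should print 1.0.48 or later
from fastai.callbacks.mem import PeakMemMetric  # fails on older releases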

Sadly, creating a new conda environment also didn't work. Perhaps I have to have a software engineer take a look at my system, as it seems that one thing gets fixed and others go wrong, despite the fact that I removed all Python and conda environments and dependencies and re-installed them yesterday to have everything up to date.

Re the fastai version, I just installed it yesterday, so I suppose it is the latest version.


Thanks for the help anyway. I will be in touch if I make any progress on it.

Sadly, creating a new conda environment also didn't work.

It's not possible for us to help unless you say specifically what doesn't work.

Are you saying that cnn_learner is still stuck? What's your full environment? See: https://docs.fast.ai/support.html

And it looks like there is a bug in PyTorch on Windows, which now has a workaround in fastai 1.0.49, just released. So please update and try again.

6 posts were merged into an existing topic: Custom ItemList, getting ForkingPickler broken pipe

I’ve been looking closely at parallel (fastai.core). It takes a function (func) and a collection (arr) and calls func on each element of arr (in parallel).

Does anyone here know why it forces the function you provide (func) to accept both the value and the index of each element in arr? This means you have to write a new function that is a copy of your old one but accepts an additional index argument that it never uses. In the source code it calls

ProcessPoolExecutor(max_workers=max_workers).submit()

which I looked up here: Python 3 Library, and it doesn't seem to need the index argument.

Could parallel possibly be reworked to drop the index argument?


No, we need the index argument for its use in verify_images. Adding it to your function and ignoring it shouldn't be too painful.
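
For example (process_one and file_paths are just hypothetical stand-ins for your own function and collection):

from fastai.core import parallel

def process_one(path):            # your original single-argument function
    print(path)

def process_one_(path, i):        # thin module-level wrapper: parallel calls func(value, index)
    return process_one(path)

file_paths = ['a.jpg', 'b.jpg']   # stand-in for your collection (arr)
parallel(process_one_, file_paths)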
