(Amrit ) #1

Hi @jeremy,

The download URL for the NASNet pretrained model brings up a 404:

pretrained_settings = {
    'nasnetalarge': {
        'imagenet': {
            'url': '',  # this link now returns a 404
        },
    },
}

There was an update, and the new URL is:

 'url': '',

(Florian Peter) #2

Nice catch!

However, when I update the URL in, I get the following error:


(Amrit ) #3

Hey @farlion, I'm not at a computer, but looking at the error I wanted to check: there have been a few changes to the fastai library in the past few days, including conv_learner. It looks like you are using an older version; if so, can you update the library and then retry?

(YJ Park) #4

Hi @farlion, you might want to find “self.linear = nn.Linear(4032, self.num_classes)” on line 548 in and modify it as follows:

    self.last_linear = nn.Linear(4032, self.num_classes)

Also, if you happen to copy the file from Cadene github, you might want to remove the code highlighted below because you would get another error (output size is too small because of pooling).


(Florian Peter) #5

Thank you both for your quick replies, you rock!

Unfortunately, I haven't gotten it running yet.
The fastai lib (from the github repo) is at the latest commit, and I'm using the default Paperspace setup with an up-to-date conda env.

When I try the modification suggested by @YJP, I get the following:

Whereas when I replace with the Cadene github version and delete the line you suggest, I get

(YJ Park) #6


I am not sure :frowning:

The way I tried was using fastai as a base, then copying and pasting only the ‘def nasnetalarge’ function from the Cadene github version (in this version, they do not have the first argument; only image). Then I changed ‘self.linear = nn.Linear(4032, self.num_classes)’ to ‘self.last_linear = nn.Linear(4032, self.num_classes)’.

When you use the Cadene github version, please also check the two places with the variable ‘num_classes’ below:

I am not sure whether this is the appropriate way to resolve the issue, but at least the model started to work this way. Hope this works for you.

(Florian Peter) #7

Hmm, thank you @YJP. I tried your approach, and it gives me the same error.
I then added the use_classifier=False param back, which got the model to download (with the new URLs from Cadene), but then it fails again with:

Shall we take this to github?

(Amrit ) #8

I was able to get the same errors that you and @YJP got. I did the following:
used this version of
I now get a different error:

RuntimeError: Given input size: (4032x7x7). Calculated output size: (4032x-3x-3). Output size is too small at /opt/conda/conda-bld/pytorch_1503965122592/work/torch/lib/THCUNN/generic/
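For context on those negative numbers: the standard pooling output-size formula explains them. A minimal sketch, assuming Cadene's 11x11 average-pool kernel, which is sized for the model's native 331x331 input (both assumptions from Cadene's implementation, not quoted in this thread):

```python
# Why the error reports a -3x-3 output: the pooling size formula
#   out = floor((in + 2*padding - kernel) / stride) + 1
# goes negative when the kernel is larger than the feature map. A 331px
# input reaches the pool as an 11x11 map; a 224px input reaches it as 7x7.
def pool_out_size(in_size, kernel, stride=1, padding=0):
    return (in_size + 2 * padding - kernel) // stride + 1

print(pool_out_size(7, 11))   # -3, matching the traceback above
print(pool_out_size(11, 11))  # 1, the intended 331px case
```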

Appreciate any thoughts on this.

(YJ Park) #9

Good morning,

@amritv and @farlion

I haven’t seen @farlion’s error yet. Just to make sure:

  1. Copy the whole code from (69ffda8 Nov 19, 2017) to the current jupyter notebook
  2. Copy ‘url’: ‘’ from Cadene’s (795f371 on Dec 19, 2017) to the current jupyter notebook.
  3. Copy the code section for ‘def nasnetalarge’ from Cadene’s to the current jupyter notebook version.
  4. Change num_classes from 1001 to 1000.
  5. Change self.linear to self.last_linear
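A sketch of what steps 2, 4, and 5 amount to in code. The URL stays elided here, and the surrounding field names follow Cadene's pretrained_settings layout, which is an assumption rather than something quoted in this thread:

```python
# Steps 2 and 4: the settings dict for the 'imagenet' weights.
pretrained_settings = {
    'nasnetalarge': {
        'imagenet': {
            'url': '',                    # step 2: paste Cadene's updated URL here
            'input_size': [3, 331, 331],  # assumed field; NASNet-A-Large's native size
            'num_classes': 1000,          # step 4: changed from 1001
        }
    }
}

# Step 5: in the copied NASNetALarge class, rename the classifier attribute:
#   before: self.linear      = nn.Linear(4032, self.num_classes)
#   after:  self.last_linear = nn.Linear(4032, self.num_classes)
```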

I started to train the model and will see how it goes in terms of accuracy and loss. I think num_classes is the number of classes we want to predict, so this may have to be adjusted once the model starts to work. I will experiment to see whether adjusting this number makes a difference.

Hope this works.

(Florian Peter) #10

Good late night :wink:

Thanks for the detailed steps @YJP, I reproduced them exactly, and still get:

@amritv No idea yet, sorry. Sounds like it’s becoming time for a deeper dive :upside_down_face:

(Amrit ) #11

removing that line of code generates this error:

AttributeError: 'NASNetALarge' object has no attribute 'avg_pool'

Not sure how your code works without that line.

(YJ Park) #12

Hi @amritv

I think yours is the version from Cadene’s github, which has different code from the fastai version.

Line 582 in your version has “def logits(self, features):”, which includes “x = self.avg_pool(x)”.
At the same position, the fastai version has the following code, which does not contain “self.avg_pool”:


Please refer to my previous post (my base is the fastai version), though @farlion tried it and still has a problem that I cannot replicate.

(Jeremy Howard) #13

@yjp thanks for your help with this. Perhaps you could post a gist with your version? And if you’re finding it’s working correctly, maybe even a PR?

(YJ Park) #14

Hi @jeremy,

I made it start to work, but I am still training to see whether this model generates an appropriate accuracy and loss first (it seems this takes a while).

Sorry, I am not used to all these terms, but what is a PR in this context? Pull request? Thank you.

(Jeremy Howard) #15

That’s right. If you haven’t made one before, try hub. There are some great posts linked from forum threads here with walkthroughs.

(YJ Park) #16

Hi @amritv,

This version is what I am currently trying to train:

I tried Jeremy’s nasnet.ipynb on dogs vs cats and it seems to work:

(Amrit ) #17

@YJP, you are right, I was using Cadene’s. I switched to the version you stated and now the model is training :+1:

(Amrit ) #18

I have another question: as NASNet large is slow and computationally expensive, do you think playing around with the cell sizes will help reduce computational costs?

I also wanted to confirm the in_channels_left and out_channels_left arguments: are these image sizes?

self.cell_17 = NormalCell(in_channels_left=4032, out_channels_left=672, in_channels_right=4032, out_channels_right=672)
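For what it's worth, a hedged reading of those arguments, based on Cadene's NormalCell implementation rather than anything confirmed in this thread: they are feature-map channel counts, not image sizes.

```python
# The NormalCell arguments above are channel counts for the cell's two
# inputs ("left"/"right" branches), not spatial image sizes; the spatial
# size of the feature map is a separate quantity.
cell_17 = dict(
    in_channels_left=4032,   # channels of the left input feature map
    out_channels_left=672,   # channels each left-branch op produces
    in_channels_right=4032,
    out_channels_right=672,
)

# The 4032/672 ratio is consistent with the cell concatenating six branch
# outputs (an observation from Cadene's code, not stated in this thread):
print(cell_17['in_channels_left'] // cell_17['out_channels_left'])  # 6
```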

(YJ Park) #19

I am glad it worked for you.

Hello @farlion,

Are you still having an issue? If so, could you please try the version I posted on github and let me know whether it works or not? Thank you in advance.

(Florian Peter) #20

@YJP unfortunately, yes :frowning:

Running on default paperspace, with your exact version of

My notebook:

Latest git commit is

commit 9d8e49a8f0afaedaf8fe53f8e1a94261b7730cc4
Author: Jeremy Howard <>
Date:   Sat Feb 3 02:49:20 2018 -0800

    bias false

…and conda env is up to date.

Thank you for your kind help!