Nasnet.py changes


(Surya Mohan) #41

Yes, that is the same error I get. Thanks for following up, @teidenzero.


(Francesco) #42

@sumo I wish I could help, but I’m getting the same error, together with @lukeharries. I haven’t found any trace of the issue anywhere else, and it looks like it’s a consequence of some recent change, so hopefully it will be addressed soon.


(Dave Castelnuovo) #43

Hey Guys,

YJP posted a fix for this issue yesterday. Go to your fast.ai folder and run “git pull” from the terminal.

You may run into a merge conflict with some notebook files that have also been updated, which will abort the pull.

If this happens, you may want to commit your changes locally and then merge. If you are not that familiar with Git, the most pain-free way of keeping up with code changes is to make a duplicate of any notebooks you are working on so that you don’t have to deal with merge issues.
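For example, a rough sketch of both options (assuming your clone lives in ~/fastai; the notebook path and commit message are just placeholders):

cd ~/fastai

# Option A: work on a copy of the notebook so git never sees your edits
cp courses/dl1/lesson1.ipynb courses/dl1/lesson1-mine.ipynb

# Option B: commit your local changes so the pull can merge them
git add -A
git commit -m "my local notebook edits"
git pull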


(Francesco) #44

trying immediately, thanks


(Luke Harries) #45

git pull fixes it now! @teidenzero it must have been a recent merge, as it didn’t work previously.

Is it worth setting up some CI/CD with CircleCI to make sure no breaking changes are merged in future?


(Luke Harries) #46

@teidenzero I’ve created a feature request for CI/CD with testing to be set up: https://github.com/fastai/fastai/issues/152

Give it a thumbs up to help prevent this from happening again.


(YJ Park) #47

Hello,

First of all, I am truly sorry about the error, as it stems from a pull request I made previously.

To fix this, Jeremy has updated the fastai library, so you will have to update your copy again.
To update it, as davecazz suggested, run the following command from the ~/fastai directory:

git pull

If you see a merge-conflict message like the one in the screenshot below:

[screenshot: git merge-conflict message]

then you can commit your changes locally by following the instructions below and run “git pull” again:

https://help.github.com/articles/resolving-a-merge-conflict-using-the-command-line/#platform-linux
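In short, that workflow looks something like this (a minimal sketch; the commit message is only an example):

cd ~/fastai
git add .
git commit -m "keep my local notebook changes"
git pull

# if the pull still reports conflicts, open the listed files, resolve the
# <<<<<<< / ======= / >>>>>>> markers, then git add and git commit once more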

Thank you for your patience.


(Luke Harries) #48

No worries at all @YJP. Happy to help set up the CI/CD if needed.


(Amrit ) #49

This is based on the seedling data. Unfreezing the layers leads to a substantial loss of accuracy. I saw the earlier post about how not unfreezing may be beneficial on data that is similar to ImageNet, but for the seedling challenge, are there any thoughts on why the numbers are skewed the other way?

Prior to unfreeze:

epoch      trn_loss   val_loss   accuracy                  
    0      0.543078   0.54388    0.913213  
    1      0.484357   0.424838   0.936952                  

[0.4248376, 0.9369517594575882]

After unfreeze:

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 6.91 µs
Epoch
100% 2/2 [10:52:34<00:00, 19577.06s/it]
epoch      trn_loss   val_loss   accuracy                       
    0      6.47271    480.077972 0.117191  
    1      6.442844   178.671402 0.10587                        

[178.6714, 0.10587002224517318]

(RobG) #50

Hi, I don’t have that problem with nasnet on seedlings; it runs fine. Perhaps some other hyperparameter is causing it, though the ones I used were very similar to those on other architectures. I notice you and I have the same leaderboard score; I wasn’t able to get nasnet to improve mine.


(Amrit ) #51

Nice one on the leaderboard :+1: not bad for our first Kaggle competition, but my score there is with resnet, and I’m trying to improve it by using nasnet. I have continued training with nasnet without unfreezing to see how far it will go without overfitting.
Is your score with nasnet?


(RobG) #52

No; the nasnet score was a touch lower, and it took much longer to train. And when ensembled with nasnet, the score didn’t improve. Some black-grass and silky-bent images seem to be too similar to be distinguished except by chance on any one model build. Time to move on, for this deep learner.