Yes. That is the same error I get. Thanks for following up @teidenzero
@sumo I wish I could help, but I'm getting the same error as @lukeharries. I haven't found any trace of the issue anywhere else, and it looks like it's a consequence of some recent change, so hopefully it will be addressed soon.
YJP posted a fix to this issue yesterday: go to your fast.ai folder and run `git pull` from the terminal.
You may run into a merge conflict with some notebook files that have also been updated, which will abort the pull.

If this happens, you may want to commit your changes locally and then merge. If you are not that familiar with Git, the most pain-free way of keeping up with code changes is to work on a duplicate of any notebook you are editing, so that you never have to deal with merge conflicts.
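The duplicate-a-notebook approach is just a copy before you start editing (`lesson1.ipynb` here is only an example name; use whichever notebook you are working on):

```shell
# Work on a personal copy so `git pull` never conflicts with your edits.
# The copied file is untracked, so updates to the original merge cleanly.
cp lesson1.ipynb lesson1-copy.ipynb
```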
trying immediately, thanks
`git pull` fixes it now, @teidenzero. It must have been a recent merge, as it didn't work previously.
Is it worth setting up some CI/CD with CircleCI to make sure there are no breaking changes merged in future?
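For what it's worth, a minimal CircleCI 2.0 config for the repo could look something like this. This is only a sketch: the image, the requirements file, and the `tests/` directory are assumptions about how the project would be exercised, not what fast.ai actually uses.

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - run: pip install -r requirements.txt  # assumes a requirements file exists
      - run: python -m pytest tests/          # assumes a test suite exists
```

Running even a smoke test on every merge would catch breakage like this before it lands on master.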
Give it a thumbs up to try and prevent this happening again
First of all, I am truly sorry about the error as this stems from the pull request I previously made.
To fix this: since Jeremy has updated the fastai library, you will have to update your local copy again.
To update it, as davecazz suggested, run the following command under the ~/fastai directory:
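The command referred to is the pull mentioned above:

```shell
cd ~/fastai
git pull   # fetch and merge the latest changes from the upstream repo
```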
If you get a conflict message like the following:
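For reference, the message git prints in that situation looks roughly like this (the file names listed will vary; the notebook path here is just an example):

```
error: Your local changes to the following files would be overwritten by merge:
	courses/dl1/lesson1.ipynb
Please commit your changes or stash them before you merge.
Aborting
```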
then you can commit your changes locally by following the instructions below, and run `git pull` again:
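The commit-then-pull steps are the usual ones; the commit message below is just a placeholder:

```shell
cd ~/fastai
git add -A                          # stage your modified notebooks
git commit -m "my local changes"    # placeholder message; use your own
git pull                            # now the merge can proceed
```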
Thank you for your patience.
This is based on the seedling data. Unfreezing the layers leads to a substantial loss of accuracy, and I saw the post earlier suggesting that not unfreezing may be beneficial on data that is similar to ImageNet. For the seedling challenge, however, are there any thoughts on why the numbers are skewed the other way?
Prior to unfreeze:
```
epoch  trn_loss   val_loss   accuracy
0      0.543078   0.54388    0.913213
1      0.484357   0.424838   0.936952
[0.4248376, 0.9369517594575882]
```
After unfreeze:

```
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 6.91 µs
Epoch 100% 2/2 [10:52:34<00:00, 19577.06s/it]
epoch  trn_loss   val_loss     accuracy
0      6.47271    480.077972   0.117191
1      6.442844   178.671402   0.10587
[178.6714, 0.10587002224517318]
```
Hi, I don't have that problem with nasnet on seedlings; it runs fine. Perhaps some other hyperparameter is causing it, though the ones I used were very similar to those on other architectures. I notice you and I have the same leaderboard score; I wasn't able to get nasnet to improve mine.
Nice one on the leaderboard, not bad for our first Kaggle competition! But my score there is using resnet, and I'm trying to improve it with nasnet. I have continued training with nasnet without unfreezing, to see how far it will go without overfitting.
Is your score with nasnet?
No, it was a touch lower and took much longer to train. And when ensembled with nasnet it didn't improve. Some black-grass and silky-bent images seem to be too similar to be distinguished, except by chance, in any one model build. Time to move on, for this deep learner.