Lesson 1 In-Class Discussion

You need to do git pull https://github.com/fastai/fastai.git

The repo that I have here is not even a git repo.

Yes, this is called Transfer Learning.

In this case, from your home dir, you should run git clone https://github.com/fastai/fastai.git, but first remove the existing fastai folder.
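For example, something like this (assuming the existing folder isn't a git repo and contains nothing you need to keep):

cd ~
rm -rf fastai                                    # remove the existing (non-git) folder
git clone https://github.com/fastai/fastai.git   # clone a fresh copy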

I've had this error with Ubuntu on Windows.
It looks like the Ubuntu-on-Windows symbolic link is somehow different; for me the solution was to delete and recreate the fastai link in the dl1 directory with:

rm fastai
ln -s ../../fastai

In CS231n Lecture 7 - Convolutional Neural Networks, Andrej Karpathy says that "the number of filters in a CNN is typically chosen as powers of 2 (64/128/256/512/1024 etc.) for computational reasons. Some libraries, if they see powers of two, might go into a special subroutine which is very efficient to perform in vectorized form. So sometimes this might give you an improvement in performance." (around the 22-minute mark of the lecture)

No conclusions so far, just information :slight_smile:
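For illustration, this only concerns how you pick each layer's filter count, e.g. in PyTorch (my own sketch, not from the lecture):

import torch.nn as nn

# Filter counts chosen as powers of two (64 -> 128 -> 256), which some
# libraries may dispatch to especially efficient vectorized kernels.
conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
conv2 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
conv3 = nn.Conv2d(128, 256, kernel_size=3, padding=1)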


Well spotted. I don't think I've seen that in practice, however.


I don't think so, and it's probably not a good idea anyway. Crestle is an in-browser environment for people who want to keep their lives simple by just learning through a notebook. If you want to make your environment more customizable, you should go for AWS, Paperspace, or another cloud provider.

tmux should be fine - try something like https://gist.github.com/ryin/3106801
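If you don't need the full script from that gist, a minimal session would look something like this (the session name is just my own choice):

tmux new -s fastai      # start a named session
jupyter notebook        # launch the server inside it; detach with Ctrl-b d
tmux attach -t fastai   # reattach later; the server keeps running in between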


While awaiting Lesson 2, may I ask whether there is an option to send the link to the online stream via email? The forum might come under even bigger pressure this Monday/Tuesday. As it is the only communication medium, if it goes down there is no way for international deep learning lovers and sleepless pilgrims to get the link.


Hey Everyone,

I am getting an ImportError when I try to run the second cell of import statements in the Lesson 1 notebook, and I just can't get past it no matter what. I am using the Paperspace Fast.AI image, have an up-to-date GitHub repository, and have run conda env update in the root of the repo, but I can't figure out the problem so far. The error is related to the imports.

Here's the error message I get:

ImportError: dlopen: cannot load any more object with static TLS

Has anybody else encountered this issue, and how can I solve it? Please help.

Complete stack trace of the error

You can find the complete stack trace of the error by clicking on Details below, if that helps.


---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-11-303e5e7a34f1> in <module>()
----> 1 from fastai.transforms import *
      2 from fastai.conv_learner import *
      3 from fastai.model import *
      4 from fastai.dataset import *
      5 from fastai.sgdr import *

~/fastai/courses/dl1/fastai/transforms.py in <module>()
      1 from .imports import *
----> 2 from .layer_optimizer import *
      3 from enum import IntEnum
      4 
      5 imagenet_mean = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape((1,1,3))

~/fastai/courses/dl1/fastai/layer_optimizer.py in <module>()
      1 from .imports import *
----> 2 from .torch_imports import *
      3 from .core import *
      4 
      5 def opt_params(parm, lr, wd):

~/fastai/courses/dl1/fastai/torch_imports.py in <module>()
      1 import os
----> 2 import torch, torchvision, torchtext
      3 from torch import nn, cuda, backends, FloatTensor, LongTensor, optim
      4 import torch.nn.functional as F
      5 from torch.autograd import Variable

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/__init__.py in <module>()
     51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
     52 
---> 53 from torch._C import *
     54 
     55 __all__ += [name for name in dir(_C)

ImportError: dlopen: cannot load any more object with static TLS


I googled it and found this: https://github.com/tensorflow/models/issues/523

My solution was to put import cv2 above import tensorflow. I don't know the reason why.
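Applied to this thread's case, the same trick would look like this at the top of the notebook, before the fastai imports (a hedged sketch, assuming the root cause here is the same static-TLS exhaustion described in that issue):

import cv2    # import first so it claims a static TLS slot early
import torch  # then torch's shared libraries load without running out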

Wow, @abdulhannanali, can you teach me how you do this Details pop-up?

Yup! I did google it but couldn't find a solution to the issue; I am not really using TensorFlow. I just updated my post with the stack trace. I really can't figure out what the issue is.

Type <details>Your stuff in here</details> to do this.


Several breakthroughs happened on the 2nd of November (I heard the weights for TensorFlow are already uploaded):

  • On ImageNet image classification, NASNet achieves a prediction accuracy of 82.7% on the validation set
  • NASNet performs 1.2% better than all previously published results
  • Combining the features learned from ImageNet classification with the Faster-RCNN framework [6] surpassed previously published state-of-the-art predictive performance on the COCO object detection task, in both the largest as well as the mobile-optimized models

Hi! I have a question about a part in the Lesson 1 notebook where we define several (4) learning rates. How are they distributed among layers? The notebook says that 'early' layers get the smallest learning rate, but never explains which layers count as 'early'. Why, when we have 34 layers, do we only need to provide 4 learning rates?
Also, when we do learn.unfreeze(), does it mean that we unfreeze all layers and train all of them, applying the smallest learning rate to the earliest ones because they have already learned good weights, and a bigger learning rate to deeper layers, which detect more complicated features, because we probably need somewhat different complicated filters? Is my logic correct?
If so, is there a way to unfreeze only a certain number of layers in the model? Say, the last 5.

In neural networks, the initial or earlier layers learn general-purpose features like edges, corners, etc. These are useful on other datasets even if they are different from the dataset the model was trained on, so they need less fine-tuning. Later layers learn more complex features and need a higher learning rate.

learn.unfreeze() unfreezes all the remaining layers. We only need 3 learning rates, not 34, because similar layers are grouped into layer groups; the learning rate is the same for all layers within one group. If you unfreeze and then supply only one learning rate, that value gets broadcast and applied to all layer groups.
I think if you want to train only the last two layer groups, you freeze all the layer groups except the last two and train those.
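For reference, this is roughly how the Lesson 1 notebook passes one learning rate per layer group with the old fastai (v0.x) API; the freeze_to index at the end is my own illustrative choice:

import numpy as np

lr = np.array([1e-4, 1e-3, 1e-2])  # early groups, middle groups, head
learn.unfreeze()                   # make every layer group trainable
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)

# To train only the later groups, freeze everything before group n:
learn.freeze_to(6)                 # groups 0..5 stay frozen; 6 onwards train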


I got that error. I fixed it by removing all files from /cache/tmp.
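That is, something along these lines (the path is the one from my setup; double-check it on yours before deleting anything):

rm -rf /cache/tmp/*   # clear the cached temp files that caused the error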


Apologies Chris, I was kind of busy for the past 4 days. Well, answering your question: you can target your environment using -n, like this: conda install -n environmentName packageName. Check the link below for more information.

https://conda.io/docs/user-guide/tasks/manage-pkgs.html#installing-packages
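So a complete command would look something like this (the env and package names are just placeholders):

conda install -n fastai scipy   # install scipy into the env named fastai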

Has anyone run into this issue? Each time I run this cell, it gets most of the way done and then hangs. It lets me run other cells afterwards, but lrf is never set to anything, so I can't progress. Any help/advice would be appreciated!
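For reference, the cell in question is the learning-rate finder from the notebook:

lrf = learn.lr_find()   # runs the LR range test and should assign the result to lrf
learn.sched.plot()      # then plots loss vs. learning rate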