Lesson 1 In-Class Discussion

tmux should be fine - try something like https://gist.github.com/ryin/3106801

1 Like

While awaiting Lesson 2, may I ask whether there is an option to send a link to the online stream via email? The forum might come under even bigger pressure this Monday/Tuesday. Since it is the only communication channel, if it goes down there is no way for international deep learning lovers and sleepless pilgrims to get the link.

2 Likes

Hey Everyone,

I am getting an ImportError when I try to run the second cell of import statements in the Lesson 1 notebook, and I can’t get past it no matter what. I am using the Paperspace Fast.AI image with an up-to-date GitHub repository, and I have run conda env update in the root of the repo, but so far I can’t figure out the problem. The error is related to the imports.

Here’s the error message I get:

ImportError: dlopen: cannot load any more object with static TLS

Has anybody else encountered this issue, how can I solve it? Please help.

Complete stack trace of the error

You can find the complete stack trace of the error by clicking on the details below, if that helps.


---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-11-303e5e7a34f1> in <module>()
----> 1 from fastai.transforms import *
      2 from fastai.conv_learner import *
      3 from fastai.model import *
      4 from fastai.dataset import *
      5 from fastai.sgdr import *

~/fastai/courses/dl1/fastai/transforms.py in <module>()
      1 from .imports import *
----> 2 from .layer_optimizer import *
      3 from enum import IntEnum
      4 
      5 imagenet_mean = np.array([103.939, 116.779, 123.68], dtype=np.float32).reshape((1,1,3))

~/fastai/courses/dl1/fastai/layer_optimizer.py in <module>()
      1 from .imports import *
----> 2 from .torch_imports import *
      3 from .core import *
      4 
      5 def opt_params(parm, lr, wd):

~/fastai/courses/dl1/fastai/torch_imports.py in <module>()
      1 import os
----> 2 import torch, torchvision, torchtext
      3 from torch import nn, cuda, backends, FloatTensor, LongTensor, optim
      4 import torch.nn.functional as F
      5 from torch.autograd import Variable

~/anaconda3/envs/fastai/lib/python3.6/site-packages/torch/__init__.py in <module>()
     51 sys.setdlopenflags(_dl_flags.RTLD_GLOBAL | _dl_flags.RTLD_NOW)
     52 
---> 53 from torch._C import *
     54 
     55 __all__ += [name for name in dir(_C)

ImportError: dlopen: cannot load any more object with static TLS


I googled and found this: https://github.com/tensorflow/models/issues/523

My solution was to put import cv2 above import tensorflow. I don’t know the reason why.

Wow, @abdulhannanali, can you teach me how you make this “Details” pop-up?

Yup! I did google it but couldn’t find a solution to the issue; I am not really using TensorFlow. I just updated my post with the stack trace. I really can’t figure out what the issue is.

Type in <details>Your stuff in here to do this</details> (you can also add a <summary>title</summary> inside to label the fold).

1 Like

Several breakthroughs happened on the 2nd of November (I heard the TensorFlow weights have already been uploaded):

  • On ImageNet image classification, NASNet achieves a prediction accuracy of 82.7% on the validation set
  • NASNet performs 1.2% better than all previously published results
  • Combining the features learned from ImageNet classification with the Faster-RCNN framework [6] surpassed previously published, state-of-the-art predictive performance on the COCO object detection task, in both the largest as well as mobile-optimized models
3 Likes

Hi! I have a question about the part of the lesson 1 notebook where we define several (4) learning rates. How are they distributed among the layers? The notebook says that ‘early’ layers get the smallest learning rate, but never explains which layers count as ‘early’. Why, when we have 34 layers, do we only need to provide 4 learning rates?
Also, when we do ‘learn.unfreeze()’, does it mean that we unfreeze all layers and train all of them, applying the smallest learning rate to the earliest ones because they have already learned good weights, and a bigger learning rate to deeper layers, which detect more complicated features, because we probably need somewhat different complicated filters? Is my logic correct?
If so, is there a way to unfreeze only a certain number of layers in the model? Say the last 5.

In neural networks, the initial or earlier layers learn general-purpose features like edges, corners, etc. These are useful on other datasets even if they are different from the dataset the model was trained on, so they need less fine-tuning. Later layers learn more complex features and need a higher learning rate.

learn.unfreeze unfreezes all the remaining layers. We only need 3 learning rates, not 34, because similar layers are grouped into layer groups, and the learning rate is the same for all layers within a group. If you unfreeze and then supply only one learning rate, that value gets broadcast and applied to all layer groups.
I think if you want to train only the last two layer groups, you freeze all the layer groups except the last two and train those.
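To make the grouping concrete, here is a minimal pure-Python sketch of how a handful of learning rates might be spread over layer groups, earliest first. This is illustrative only, not fastai’s actual code; `expand_lrs` and `assign_group_lrs` are hypothetical helpers.

```python
def expand_lrs(lr, n_groups):
    """Broadcast a single learning rate, or validate a per-group list.

    A lone number is applied to every layer group; a list must supply
    exactly one entry per group.
    """
    if isinstance(lr, (int, float)):
        return [lr] * n_groups
    lrs = list(lr)
    if len(lrs) != n_groups:
        raise ValueError(f"expected {n_groups} learning rates, got {len(lrs)}")
    return lrs

def assign_group_lrs(layers, lrs):
    """Split `layers` into len(lrs) roughly equal groups (earliest first)
    and pair each group with its learning rate."""
    n = len(lrs)
    size = -(-len(layers) // n)  # ceiling division
    groups = [layers[i * size:(i + 1) * size] for i in range(n)]
    return list(zip(groups, lrs))

# 34 layers, 3 learning rates: early groups get the smallest lr
layers = [f"layer{i}" for i in range(34)]
pairs = assign_group_lrs(layers, expand_lrs([1e-4, 1e-3, 1e-2], 3))
for group, lr in pairs:
    print(len(group), lr)
```

Passing a single float instead of a list broadcasts it to every group, mirroring what happens when you supply one lr after unfreezing.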

3 Likes

I got that error too. I fixed it by removing all files from /cache/tmp.

1 Like

Apologies Chris, I was kind of busy for the past 4 days. Answering your question: you can specify your environment using -n, like this: “conda install -n environmentName”. Check the link below for more information.

https://conda.io/docs/user-guide/tasks/manage-pkgs.html#installing-packages

Has anyone run into this issue? Each time I run this cell, it gets most of the way done and then hangs. It lets me run other cells afterwards, but lrf is never set to anything, so I can’t progress. Any help/advice would be appreciated!

This means the loss started increasing, so there is no reason to continue the test (i.e., to keep increasing the lr).
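That early-stopping behaviour can be sketched in plain Python. This is an illustrative toy, not fastai’s actual lr_find implementation; `loss_fn(lr)` stands in for training one mini-batch at that learning rate, and the stopping multiplier is an assumption.

```python
import math

def lr_find(loss_fn, start_lr=1e-5, end_lr=10.0, steps=100, stop_mult=4.0):
    """Learning-rate range test: the lr grows geometrically each step,
    and the sweep aborts early once the loss blows up past a multiple
    of the best loss seen so far -- larger lrs are not worth testing."""
    mult = (end_lr / start_lr) ** (1 / steps)
    lr, best = start_lr, float("inf")
    history = []
    for _ in range(steps):
        loss = loss_fn(lr)
        history.append((lr, loss))
        if math.isnan(loss) or loss > stop_mult * best:
            break  # loss is climbing: stop the sweep early
        best = min(best, loss)
        lr *= mult
    return history

# Toy loss curve: improves until lr ~ 0.03, then diverges
curve = lambda lr: 1.0 / (1 + 100 * lr) + (lr / 0.1) ** 2
hist = lr_find(curve)
print(f"stopped after {len(hist)} of 100 steps, at lr = {hist[-1][0]:.3g}")
```

So a run that “hangs” most of the way through the progress bar has simply stopped testing early, which is expected.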

Not sure if you already resolved it but just try this in paperspace:

import cv2
from fastai.transforms import *
from fastai.conv_learner import *

So, after running some notebooks on AWS instances, I thought of trying out Paperspace to see if I’d get speedups, and ran into this issue as well. I have a feeling it has to do with load order and dependencies; that’s kinda tricky to figure out with import * going on.

Curious if anyone else has run into this issue and might have fixed it? ping @sermakarevich @abdulhannanali @jeremy

Edit: There are existing threads around this discussion. I just ran into them and am adding the links here for future reference:

1 Like

I am no longer using Paperspace, but you could try the solution suggested by @Deb above. However, since @jeremy removed the dependency on cv2, have you tried git pull and running the notebook again? Another option is to do a scratch installation on their Ubuntu 16.04 image; it doesn’t have the bloat of a loaded desktop environment, and you can use the 16.04 version this way. You’ll only be able to SSH into the Ubuntu image, though :slight_smile:

Yup, as of yesterday (with the latest git pull) I still see the issue if I don’t import torch and cv2 ahead of everything else on Paperspace. With that import order I’m able to proceed, so it’s not a major issue for me.

I was revisiting Lecture 1, where I noticed Jeremy’s approach to writing code. I missed it at the time (I was probably on the forum at that moment), but his take is interesting.

His coding style is compact and uses shortish names. His reasoning:

  • When code is compact, you can see everything that is happening in one line or within a small region
  • Shorter variable names are a middle ground between what mathematicians use (single letters) and what software engineers use (longer variable names)

He makes a distinction between interactive data science and software engineering. Those who are into the fastai codebase may find these 3 minutes insightful.

2 Likes

Thanks for noticing :slight_smile:

I’ve re-recorded the lesson 1 video to make it shorter, and to show how to use both Crestle and Paperspace. The new version is in the top post, or use this link: https://youtu.be/IPBSB1HLNLo

2 Likes