Wiki: Lesson 1

I use AWS for other things and so I already had an account with all the billing things configured.

From a SageMaker point of view, I do not know enough to tell you whether it's better than Paperspace.

There are many "shortcuts" that AWS provides that may reduce the time spent maintaining an ML setup. Also, with the new pricing reductions, a p2.xlarge is $0.90 an hour.

You might find that you’ll develop on Paperspace but then run production workloads on AWS.

Thanks, I will probably start with Paperspace and watch how SageMaker comes along for a while.

I can share that the Lesson 1 and 2 work runs on the ml.p2.xlarge machines, granted it's a little slow at times.

The other thing to note is that AWS is one of the seven cloud companies that get pre-release CPUs/GPUs/FPGAs before the rest of the marketplace.

If you need the cutting edge of computing, AWS may be the place.

I would also like to see someone try it on Azure.

@sayko I still haven't been able to make it work.

Hi Sir,

I am still confused about "sz". Let's say sz = 224: does that mean it will reduce the resolution, or crop the image, if the image is larger than 224 x 224 pixels? And how would it change for a different dataset (medical images, satellite images, etc.)?

By default, it just reduces (or increases) the resolution, but you have the option of applying crops, zooms, stretching, rotations, etc.

And yes, the sort of transforms you apply depends on the dataset. Sorry if that’s a bit vague!
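
For concreteness, here is a sketch of how sz and the optional transforms fit together, using the fastai (v0.7) API from the lesson notebook:

    # sz sets the target resolution: images are resized/cropped to sz x sz.
    # aug_tfms adds data augmentation (e.g. flips, small rotations) and
    # max_zoom enables random zooms up to the given factor.
    from fastai.conv_learner import *

    PATH = "data/dogscats/"
    arch = resnet34
    sz = 224
    tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
    data = ImageClassifierData.from_paths(PATH, tfms=tfms)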


Thank you so much. Actually, my second question was: if we have a medical or satellite image dataset, should I decrease or increase sz?

Thanks for doing this for all of us Jeremy & Rachel!

I have two questions re Lesson 1:

Regarding the Cyclical Learning Rates paper: is there a way we can use this method to determine the optimum learning rate even when our optimizer isn't plain SGD? For example, what if our loss is a combination of two different losses, or something like an entropy loss?

Where to put the sample images for the homework assignment: in the video, Jeremy asked us to put in a few images from two classes of our choice and train the network on those classes. In the data subfolder, we already have the dog and cat subfolders. Do we remove those and put in our new image class folders? If we don't, then the network will try to classify four different categories, right?


Sorry, I can only help with question 2:

  1. Create a separate folder in the "data/" directory.
  2. Point the PATH variable (it is set in the 4th code cell of the notebook) to your new folder. So instead of PATH = "data/dogscats/" it should then say e.g. PATH = "data/myimages/". Don't forget to run the cell.
  3. Make sure the folder contains the structure mentioned in the lecture and used in the dogs/cats example (train and valid folders with class subfolders; see the sketch after this list).
  4. The notebook will basically run completely with your dataset now. (There are a few hardcoded things, like looking at cat pics first; there you will also have to adjust the path manually.)
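
A sketch of the expected layout, with hypothetical class names "bikes" and "cars" (from_paths infers the labels from the subfolder names under train/ and valid/):

    # data/myimages/
    #   train/
    #     bikes/  *.jpg
    #     cars/   *.jpg
    #   valid/
    #     bikes/  *.jpg
    #     cars/   *.jpg
    PATH = "data/myimages/"
    data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))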

A couple of dumb questions about stuff mentioned in the first video:

  1. The Universal Approximation Theorem is mentioned as requiring an exponential-size network, but then backpropagation (with depth) is said to help with that. Is there a version of the theorem that says how much it helps? E.g., does the required size become polynomial?

  2. The learning rate finder is reminiscent of old-fashioned numerical root finders and the like, used in calculators and desktop programs. There's a famous article by W. Kahan about the HP-34C solver from 1979: http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1979-12.pdf (starts at page 20 of the PDF). Is this similar? Is traditional numerics much help in machine learning? (A usage sketch of the finder follows this list.)

  3. Similarly, is it reasonable to find the minimum by numerical differentiation and then looking for derivative = 0 with a traditional root finder?

  4. The demo showing the different layers of a DNN recognizing features showed a layer recognizing circles. But since the filters are small grids (e.g. 3x3), would that actually recognize circles of only a specific size? Do actual deep learning algorithms manage to recognize shapes like circles regardless of their size? Does anyone train on Fourier transforms of the input images, or anything like that?
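
For context on question 2, this is roughly how the lesson's learning rate finder is invoked (fastai v0.7 API; it sweeps the learning rate upward over mini-batches and records the loss, rather than solving for a root):

    # Assumes the learn object from the lesson 1 notebook.
    lrf = learn.lr_find()   # raise the LR each mini-batch until the loss diverges
    learn.sched.plot_lr()   # learning rate vs. iteration
    learn.sched.plot()      # loss vs. learning rate; pick a rate where the
                            # loss is still falling steeply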

Sorry to be so low level early in the course, the opposite of the advice about going top-down. Those issues just jumped out at me.

The course looks great, thanks a million for doing it.

Has anyone tried to run this on Kaggle? I tried, but it fails on ConvLearner.pretrained(); it complains about failing to download the model.

Then I changed to ConvLearner.lsuv_learner(). Is that the correct thing to do?


About: Understanding fast.ai code.

I recently came across the 'sched' method being run on the object returned by ConvLearner.pretrained (the source of ConvLearner.pretrained suggests that the returned object comes from a cls method):

I want to understand the 'sched()' method that is applied to the returned object. So if I were to use the '??' Jupyter notebook shortcut for accessing code docs, how would I reveal the 'sched()' method? I tried the following:

??ConvLearner.pretrained.sched
to which I get output: Object ConvLearner.pretrained.sched not found.

Thanks,
Rahim
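
As I understand the library, sched is not a method defined on ConvLearner.pretrained; it is an attribute set on the Learner instance (for example by lr_find), so ?? has to be applied to an object. A sketch, assuming the notebook's learn object:

    learn = ConvLearner.pretrained(arch, data, precompute=True)
    learn.lr_find()      # this sets learn.sched to a learning-rate scheduler
    type(learn.sched)    # see which scheduler class is in use
    ??learn.sched        # show its docs/source in the notebook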

Thanks Marc!

Can you let me know how you are downloading images for this exercise?

Hi, I am trying to run the code in lesson 1 and am getting a CUDA error. It seems that after it runs fit, it does not release the CUDA resource. So when I run the cell:

arch=resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)

I get the error:

RuntimeError: cuda runtime error (46) : all CUDA-capable devices are busy or unavailable at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu:58

How can I solve this?

Hi Adam,
Never seen this myself, but you don’t seem to be the only one with this type of error.

Maybe this helps; see the last comment about switching modes of your GPU:

Other than that, are you aware of whether you are running multiple processes that use the GPU?
I am not sure what to make of this SO answer, but maybe otherwise keep looking in that direction:

In general, just googling for your error message will often help you find the solution. It’s what I did with your message above.

This may be related to some GPU memory being consumed by driving the display, especially with a 4K+ monitor. You may want to separate display and compute onto different cards.
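
If you want to see what PyTorch itself can reach, a few standard calls help narrow this down (a sketch; run it in the notebook before fitting):

    import torch
    print(torch.cuda.is_available())      # can PyTorch reach a usable CUDA device?
    print(torch.cuda.device_count())      # number of visible devices
    print(torch.cuda.get_device_name(0))  # which card device index 0 maps to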

The image size is tied to your particular problem and the computing power you have. Large image sizes need more GPU memory and make training slower. However, if you downsize the images too much, important features may get lost. Medical images typically need higher resolution than other samples.
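
A sketch of the tradeoff with the lesson's fastai (v0.7) API; the sz/bs pairs here are illustrative, and bs (batch size) usually has to shrink as sz grows:

    for sz, bs in [(64, 64), (224, 24)]:
        tfms = tfms_from_model(arch, sz)
        data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)
        learn = ConvLearner.pretrained(arch, data, precompute=True)
        learn.fit(0.01, 1)  # larger sz: more GPU memory per batch, slower epochs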

This V2 is very different from V1, since it focuses more on learning the high-level picture (top-down approach) by abstracting more of the implementation into the fast.ai library. It is not a course for learning how to use TF, Keras, etc.

Is there more information (maybe a paper/post and author) about the epithelial/stroma classifier mentioned at 47:18? I’m interested in further reading about this.