Lesson 2 In-Class Discussion

You would have to hack the fit code to do it inside.

@yinterian Thanks. That's what I was thinking. The same goes for changing learning rates based on some threshold conditions, I guess?

You can save in between epochs.

@yinterian With learn.save?

It depends on how late in the model you do the pooling. If you pool in the first layer, it is equivalent to scaling the input image. ResNet handles different input sizes by doing Global Average Pooling at the end, just before the final fully-connected layer. Global Average Pooling takes the average of each feature map from the previous layer. Since the number of feature maps is fixed by the architecture, the output of the pooling layer has a fixed size that doesn't depend on the spatial dimensions of the feature maps. See https://stackoverflow.com/questions/44925467/does-resnet-have-fully-connected-layers
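To see why Global Average Pooling gives a fixed-size output, here is a minimal numpy sketch (an illustration, not the actual ResNet code): each feature map collapses to its mean, so the result depends only on the channel count, never on the spatial size.

```python
import numpy as np

def global_avg_pool(feature_maps):
    """Average each feature map down to a single number.

    feature_maps: array of shape (channels, height, width)
    returns: array of shape (channels,)
    """
    return feature_maps.mean(axis=(1, 2))

# Feature maps from different input resolutions: the spatial size varies,
# but the pooled output is always one value per channel (512 here).
for size in (7, 10, 14):
    fmaps = np.random.randn(512, size, size)
    print(global_avg_pool(fmaps).shape)  # (512,) every time
```

That fixed-length vector is what feeds the final fully-connected layer, which is why the architecture can accept different image sizes.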

@jeremy If you are using top-down data augmentation, can you rotate the image through a full 360 degrees, not just flip it horizontally or vertically?

For example, a biopsy slide could be oriented in any direction.

Yes, with learn.save

AWS :clap: :clap: :clap:

What is the intuition behind how TTA (Test Time Augmentation) works?

@yinterian I am sorry, I am not getting this. The learning does not stop between epochs, so to save between epochs based on certain conditions, are there additional arguments to learn.save? Or is it something else?

You have to write some code to be able to do it.
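A minimal sketch of what that code might look like: a hypothetical fit loop (not fastai's actual internals) that saves a checkpoint whenever validation loss improves and drops the learning rate when it stalls. The train_epoch, validate, and save callables are stand-ins for your own training code; save would wrap something like learn.save.

```python
# Hypothetical training loop: checkpoint on improvement, reduce LR on plateau.
def fit(train_epoch, validate, save, n_epochs, lr=0.01, patience=1, factor=0.1):
    best_loss, stale = float("inf"), 0
    for epoch in range(n_epochs):
        train_epoch(lr)                   # one pass over the training data
        val_loss = validate()             # loss on the validation set
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0
            save(f"best_epoch_{epoch}")   # analogous to learn.save(...)
        else:
            stale += 1
            if stale >= patience:         # threshold condition met:
                lr *= factor              # drop the learning rate
                stale = 0
    return best_loss, lr
```

The same hook points (after each validate call) are where any other threshold-based logic, like early stopping, would go.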

My understanding is that it basically takes four different augmented versions of the image and averages the predictions together. So if one is a tough image to predict, hopefully the other three give better looks at the image.
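As a sketch of that averaging idea (the model_predict and augment callables here are hypothetical stand-ins, not the library's actual TTA implementation):

```python
import numpy as np

def tta_predict(model_predict, image, augment, n_aug=4):
    """Predict on several augmented versions of an image and average
    the class probabilities, so one bad "look" can be outvoted."""
    preds = [model_predict(augment(image)) for _ in range(n_aug)]
    return np.mean(preds, axis=0)
```

Averaging probabilities like this smooths out predictions that happen to be wrong for a single crop or flip.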

How do we SSH with Cygwin? I have a Windows 7 machine.

Try http://www.putty.org/ :slight_smile:

@jeremy For those of us with home machines designed for deep learning, is there a list of the packages that are installed in the AMI?

I built a deep learning box after the last course and I want to get it set up to be equivalent to the AMI.

Install the environment based on the environment.yml file:

  1. Git clone the fastai repo.
  2. Run conda env create -f environment.yml (note: you will need a *nix system, as some packages are not supported on Windows).
  3. Activate the env and run pip install -r requirements.txt to install the Python packages needed.
  4. ???
  5. profit.

Use the conda command that he ran, conda env update. There is also a way to initially create a conda environment, but I am not remembering it off the top of my head; hopefully somebody else does. Basically, you point it at the environment.yml file and it magically installs everything into a new environment. It is really slick and awesome.

Note: The conda env update must be done in the same directory as the environment.yml.

Any idea where to get the pretrained ResNeXt50 model from? It doesn't seem to be in the model zoo.

Edit: I found this GitHub project (https://github.com/clcarwin/convert_torch_to_pytorch) that can convert the model available here (https://github.com/facebookresearch/ResNeXt) into one fastai can use.

Thanks again @jeremy and @yinterian for this lesson. :clap:

I have a couple of general/ML questions, in the context of Deep learning.

  • I see that we picked 20% of the data for validation in one of the examples. What about cross-validation, e.g. k-fold validation? Is it perhaps too expensive to compute?

  • We are using accuracy to measure model performance. I'm assuming this is (True Positives + True Negatives) / Total. Just curious: is it uncommon or complicated to use other measures, such as precision, recall, PR curves, AUC (ROC curve), etc.?

  • How do we deal with unbalanced classes? (I think Jeremy mentioned a paper on balancing datasets.)
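For concreteness, here is a small sketch of those metrics computed from raw confusion-matrix counts. The unbalanced counts below (95 negatives, 5 positives) are made up to show why accuracy alone can mislead on skewed data:

```python
def metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, and recall from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total    # the (TP + TN) / Total definition above
    precision = tp / (tp + fp)      # of predicted positives, how many are right
    recall = tp / (tp + fn)         # of actual positives, how many were found
    return accuracy, precision, recall

# A model that finds only 2 of 5 positives on a 95/5 split still looks great
# by accuracy, while recall exposes the problem.
acc, prec, rec = metrics(tp=2, tn=94, fp=1, fn=3)
print(acc, prec, rec)  # 0.96 accuracy, but only 0.4 recall
```

This is one reason precision/recall-style measures matter most precisely when classes are unbalanced, tying the last two questions together.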

Thanks @charlielee this was very helpful. :slight_smile: