Live coding 17

This topic is for discussion of the 17th live coding session.

<<< session 16 | session 18 >>>

Links from the live-coding

What was covered

  • Paperspace quick setup
  • Normalization
  • Getting model statistics

Video timeline

00:00 - Setting up Paperspace - Clone fastai/paperspace-setup
02:30 - pipi fastai & pipi -U fastai
03:43 - Installing universal-ctags: mambai universal-ctags
05:00 - Next step: Adding normalization to TIMM models
06:06 - Oh! First let’s fix pre-run.sh
07:35 - Normalization in vision_learner (with same pretrained model statistics)
09:40 - Adding TIMM models
13:10 - model.default_cfg to get TIMM model statistics
16:00 - Let’s go to _add_norm()… adding _timm_norm()
20:30 - Testing and debugging
28:40 - Doing some redesign
32:23 - Applying redesign for TIMM
36:20 - create_timm_model and TimmBody
38:12 - Check the default config of a TIMM model
39:05 - Making create_unet_model work with TIMM
40:20 - Basic idea of U-nets
41:25 - Dynamic U-net
48:00 - fast.ai convolutional layer
49:00 - Figuring out what would need to be changed
51:45 - Is anything unique about the fact that TIMM models cut the head and tail off?
53:10 - The Layers Notebook doesn’t work
54:04 - Can we still predict rice disease?
55:28 - Is it possible to create a layer that learns normalization?
56:11 - When we fine tune, basically normalization doesn’t really matter
57:25 - Question about U-net on mobile app inference
59:16 - Slightly better error initially, but there is no difference as it trains
01:00:37 - Any questions?
01:01:30 - Why normalization used to matter a lot
01:02:08 - Asking François about fine tuning Keras models

7 Likes

Here is my attempt at adding timm models to unet_learner:

6 Likes

Does Jeremy use micromamba now instead of mamba?
https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html

1 Like

As far as I remember, it was always micromamba (for the Paperspace setup).

1 Like

Just for Paperspace. Mamba requires a full second Python install!

2 Likes

Is there some custom ipython file that I need for Paperspace? I have set it up using paperspace-setup, but notebooks are not looking at the system path:
Notebooks:


System:

Thanks

In this live coding we jump straight into how to use the proper stats for timm models, without first mentioning that we were not using those stats up until now. So just to be sure: was there a live coding session from June 29th that wasn’t released?

I think this is Session 18. I don’t think Session 17 has been posted, because at the end of it Jeremy was working on getting the pre-run.sh script to run. This session’s end discussion was about the U-Net modification, etc.

2 Likes

You know how we have a new script for setting up Paperspace (GitHub - fastai/paperspace-setup: Setup a paperspace instance for fastai)? Well, I’m now having issues with previous instances that had been working.

In fact, I had been using Paperspace daily with the setup we established throughout the live coding sessions up until now. After cloning and running the fastai script on a new instance I can replicate everything from the video, but I notice that the site-packages that had been in .local for python3.9 on another instance are no longer there, that packages are now being installed into python3.7, and yet my ipython version still maps to python3.9??

I’ve spent about an hour trying to fix the problem, but maybe there is a two-minute fix someone here can help with? Why are we reverting to 3.7 for the Paperspace setup?

2 Likes

A similar issue was identified in Session 2, but it wasn’t resolved at that point, and the apparent conflict introduced by running the fastai/paperspace-setup script isn’t the same. In my case ipython is already on 3.9 while python maps to version 3.7. I’m sure it was addressed somewhere, but I haven’t found it.

I think the issue is that Jupyter uses a kernel.json from /opt/conda/share/jupyter/kernels/python3/ by default. When I changed the argv path to “/storage/cfg/.conda/bin/python” it seemed to use the correct version, and the persistence of the installed packages also worked correctly.

I’m not sure how to make the default kernel point at “/storage/cfg/.conda/share/jupyter/kernels/python3”.
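In case it helps, here’s a minimal sketch of how that kernel.json tweak can be scripted (the paths are the ones above; adjust them to whatever your instance actually uses):

    import json
    from pathlib import Path

    # default kernel spec that Jupyter picks up on the Paperspace container
    spec = Path('/opt/conda/share/jupyter/kernels/python3/kernel.json')

    cfg = json.loads(spec.read_text())
    # point the kernel at the persistent conda python instead of the container one
    cfg['argv'][0] = '/storage/cfg/.conda/bin/python'
    spec.write_text(json.dumps(cfg, indent=2))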

1 Like

It wasn’t addressed - it’s a problem with the paperspace container.

There was a very low-res recording posted as ‘live coding 17’ for a couple of hours, and then I believe it changed.

Finally getting around to catching up with the live coding sessions. I really liked this session, in particular the refactoring process around getting the timm stats used for normalisation. :raised_hands:

2 Likes

A slightly more detailed set of notes built upon the video timeline. There is so much content in this session that it deserves a lot of revisits.

00:00 - Setting up Paperspace - Clone fastai/paperspace-setup. If you can’t use the terminal the way Jeremy did in the video, then open a terminal from JupyterLab instead. How to install with mambai and pipi?

02:30 - How to check whether everything is set up properly? How to update to the latest fastai? pipi fastai & pipi -U fastai.


How to check whether fastai is installed in the right place?


How to check fastai version

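A quick way to check both of these from Python itself:

    import fastai

    print(fastai.__version__)   # which fastai version is active
    print(fastai.__file__)      # where it is installed, i.e. which site-packages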

03:43 - How to install ctags and check its version and installation location?

mambai universal-ctags


05:00 - Adding normalization to TIMM models

06:06 - Fix pre-run.sh by adding popd at the end

07:35 - Find the Normalize class in data/transforms.py.

How to find Normalize in the fastai source with rg?


How to calculate normalization for vision? Do we need to use the pretrained model’s mean and std for normalization when we do fine-tuning? Why?
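(For reference, this is roughly what normalizing with a pretrained model’s stats looks like when done by hand in fastai; imagenet_stats is the standard ImageNet mean/std that fastai ships with, used here purely as an example:)

    from fastai.vision.all import *

    # mean/std of the dataset the model was pretrained on, applied after each batch
    norm = Normalize.from_stats(*imagenet_stats)

The point of the discussion is that for fine-tuning you want the stats the pretrained model was trained with, not stats computed from your own dataset.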

08:28 - Go to vision/learner.py: how does vision_learner add the pretrained model’s normalization?


Where does _add_norm get the pretrained model’s stats (mean and std) from?



The current vision_learner does not have the TIMM metadata for the stats
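For reference, a simplified version of what _add_norm does is essentially this (not the exact fastai source; see fastai/vision/learner.py for the real thing):

    from fastai.vision.all import *

    # simplified sketch: look up the stats registered for the arch and
    # append a Normalize transform to the after_batch pipeline
    def _add_norm(dls, meta, pretrained):
        if not pretrained: return
        stats = meta.get('stats')
        if stats is None: return
        dls.add_tfms([Normalize.from_stats(*stats)], 'after_batch')

Here meta is the entry from fastai’s model_meta dict, which is exactly why TIMM models (which aren’t in that dict) don’t get their stats picked up.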

09:40 - Search around to find the stats of TIMM, but no luck yet.

vision_learner can use timm models as an option


12:16 - How to create a TIMM model from vision_learner?



12:54 - How to search TIMM models?

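If you want to reproduce the search from code, timm can list its models directly (convnext is just an example pattern):

    import timm

    # all pretrained convnext variants, for example
    print(timm.list_models('convnext*', pretrained=True))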

13:10 - Where can we access a TIMM model’s stats, e.g., mean and std? model.default_cfg gives the TIMM model statistics


How to access the mean and std of all pretrained models of TIMM?

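To poke at this yourself (note that default_cfg is an attribute, not a method; convnext_tiny is just an example):

    import timm

    m = timm.create_model('convnext_tiny', pretrained=False)
    cfg = m.default_cfg             # dict of pretraining metadata
    print(cfg['mean'], cfg['std'])  # the stats we want for Normalize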

16:00 - Let’s go to _add_norm()… adding _timm_norm()

How to add the stats of a TIMM model to Normalize as a transform for after_batch?



Let’s look at vision_learner with a TIMM model and the stats kept

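A simplified sketch of the resulting _timm_norm (not the exact fastai source) looks something like this: read the mean/std out of the timm config and add the transform to the dls, just like _add_norm does:

    from fastai.vision.all import *

    # simplified sketch: cfg is the timm model's default_cfg
    def _timm_norm(dls, cfg, pretrained):
        if not pretrained: return
        if cfg.get('mean') is not None and cfg.get('std') is not None:
            dls.add_tfms([Normalize.from_stats(cfg['mean'], cfg['std'])], 'after_batch')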

20:30 - What happens when we do dls.add_tfms([tfm], 'after_batch')? How did Jeremy figure it out step by step?

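A quick way to see what add_tfms actually did (assuming you already have a dls built, e.g. from the paddy notebook):

    from fastai.vision.all import *

    tfm = Normalize.from_stats(*imagenet_stats)   # any Normalize will do for this test
    dls.add_tfms([tfm], 'after_batch')            # dls is your existing DataLoaders
    print(dls.after_batch)                        # the pipeline should now include Normalize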



23:08 - Why does the string version of vision_learner not work? Because create_timm_model creates a head and a body wrapped in a Sequential, and vision_learner does not like a Sequential there.

24:19 - How did Jeremy fix the Sequential problem of vision_learner step by step? How did he arrive at the decision to change how vision_learner works?

28:40 - How did Jeremy work out the redesign of vision_learner step by step by changing create_body? Jeremy changed create_body to accept a model instead of an arch, and also made create_vision_model accept a model rather than just an arch.

32:23 - How did Jeremy apply the same design pattern to TIMM, i.e., TimmBody and create_timm_model?

35:26 - What about the keyword arguments we pass on to the timm model? How did Jeremy take care of **kwargs for timm models?
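To make the redesign concrete: the key idea is that the body is now built from an already-constructed model rather than from an architecture function, and with timm you can ask for a headless feature extractor directly. This is illustrative only, with resnet18 just as an example (the real TimmBody / create_timm_model code lives in fastai’s vision/learner.py):

    import timm

    # num_classes=0 drops timm's classifier head, and global_pool='' keeps the
    # spatial feature map - which is roughly what a fastai "body" is
    net = timm.create_model('resnet18', pretrained=False, num_classes=0, global_pool='')

Any extra keyword arguments passed along to timm.create_model get forwarded to the model constructor, which as far as I can tell is how the **kwargs question at 35:26 is handled.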

38:12 - Check the default config of a TIMM model

39:05 - A challenge for students: making create_unet_model work with TIMM

40:20 - Basic idea of U-nets

41:25 - Dynamic U-net walkthru

48:00 - fast.ai convolutional layer ConvLayer
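For reference, fastai’s ConvLayer bundles a conv, a norm layer and an activation in one module:

    from fastai.vision.all import *

    # Conv2d + BatchNorm2d + ReLU by default; ni=3 input channels, nf=16 filters
    layer = ConvLayer(3, 16, ks=3, stride=2)
    print(layer)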

49:00 - Figuring out what would need to be changed

51:45 - Is anything unique about the fact that TIMM models cut the head and tail off?

53:10 - The Layers notebook doesn’t work, and how it gets fixed.

54:04 - Can we still predict rice disease with the updated vision_learner?

55:28 - Is it possible to create a layer that learns normalization?

56:11 - When we fine tune, basically normalization doesn’t really matter

57:25 - Question about U-net on mobile app inference

59:16 - Slightly better error initially, but there is no difference as it trains

01:00:37 - Any questions?

01:01:30 - Why normalization used to matter a lot

01:02:08 - Asking François about fine tuning Keras models

4 Likes