00:00 - Setting up Paperspace - Clone fastai/paperspace-setup
02:30 - pipi fastai & pipi -U fastai
03:43 - Installing universal-ctags: mambai universal-ctags
05:00 - Next step: Adding normalization to TIMM models
06:06 - Oh! First, let's fix pre-run.sh
07:35 - Normalization in vision_learner (using the same statistics as the pretrained model)
09:40 - Adding TIMM models
13:10 - model.default_cfg to get TIMM model statistics
16:00 - Let's go to _add_norm()… adding _timm_norm()
20:30 - Testing and debugging
28:40 - Doing some redesign
32:23 - Applying the redesign for TIMM
36:20 - create_timm_model and TimmBody
38:12 - Checking the default config of a TIMM model
39:05 - Making create_unet_model work with TIMM
40:20 - Basic idea of U-Nets
41:25 - Dynamic U-Net
48:00 - fast.ai convolutional layer
49:00 - Figuring out what would need to be changed
51:45 - Is there anything unique about the fact that TIMM models cut the head and tail off?
53:10 - The Layers notebook doesn't work
54:04 - Can we still predict rice disease?
55:28 - Is it possible to create a layer that learns normalization?
56:11 - When we fine-tune, normalization basically doesn't matter much
57:25 - Question about U-Net inference in a mobile app
59:16 - Slightly better error initially, but no difference as it trains
01:00:37 - Any questions?
01:01:30 - Why did normalization use to matter a lot?
01:02:08 - Asking François about fine-tuning Keras models
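Since the stats discussion at 07:35 and 13:10 is central here: each timm model carries its pretraining statistics on default_cfg. A quick way to inspect them (a sketch; the exact keys are per current timm and worth double-checking):

```python
import timm

# Every timm model exposes its pretraining config, including the
# normalization statistics fastai wants to reuse
m = timm.create_model('convnext_tiny', pretrained=False)
print(m.default_cfg['mean'], m.default_cfg['std'])
# ImageNet-style models typically report (0.485, 0.456, 0.406) / (0.229, 0.224, 0.225)
```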
Is there some custom IPython configuration file that I need for Paperspace? I have set things up using the paperspace-setup repo, but the notebook is not looking at the system path.
In this live coding session we jump straight into using the proper stats for timm models, without any mention of the fact that we weren't using those stats up until now. So just to be sure: was there a live coding session from June 29th that wasn't released?
I think this is Session 18. I don't think Session 17 has been posted, because at the end of it Jeremy was working on getting the pre-run.sh script to run. This session's closing discussion was about U-Net modifications etc.
In fact, I had been using Paperspace daily with the setup we established throughout the live coding sessions up until now. After cloning and running the fastai/paperspace-setup script on a new instance, I can replicate everything from the video, but I notice that the site-packages that had been in .local for python3.9 on another instance are no longer there, and that packages are now being installed into python3.7, yet my ipython version still maps to python3.9.
I've spent about an hour trying to fix the problem, but maybe there is a two-minute fix someone here can help with? Why are we reverting to 3.7 for the Paperspace setup?
A similar issue was identified in Session 2, but it wasn't resolved at that point, and the apparent conflict introduced by running the fastai/paperspace-setup script isn't the same. In my case ipython is already on 3.9 while python maps to 3.7. I'm sure it was addressed somewhere, but I haven't found it.
I think the issue is that Jupyter uses the kernel.json from /opt/conda/share/jupyter/kernels/python3/ by default. When I changed the argv path to /storage/cfg/.conda/bin/python, it used the correct version, and the persistence of the installed packages also worked correctly.
I'm not sure how to force the default kernel to /storage/cfg/.conda/share/jupyter/kernels/python3, though.
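If it helps anyone, the edit described above can be scripted; this is a sketch that patches the default kernelspec to point at the persistent conda python (paths taken from the posts above; whether this is the right kernelspec on your instance is an assumption):

```python
import json
from pathlib import Path

# Default kernelspec Jupyter picks up on the Paperspace image (per the post above)
spec = Path('/opt/conda/share/jupyter/kernels/python3/kernel.json')
cfg = json.loads(spec.read_text())

# Point the interpreter at the persistent conda env so installed
# packages survive across instances
cfg['argv'][0] = '/storage/cfg/.conda/bin/python'
spec.write_text(json.dumps(cfg, indent=1))
print(cfg['argv'])
```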
Finally getting around to catching up with the live coding sessions. I really liked this session, in particular the refactoring process around getting timm stats used for normalisation.
Some fairly detailed notes built on the video timeline. There is so much content in this session that it deserves plenty of revisits.
00:00 - Setting up Paperspace - Clone fastai/paperspace-setup. If you can't use the terminal the way Jeremy did in the video, open a terminal from JupyterLab instead. How do you install things with mambai and pipi?
02:30 - How do you check whether everything is set up properly? How do you update to the latest fastai? pipi fastai and pipi -U fastai.
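One quick sanity check that pairs with this step (plain Python, nothing fastai-specific; what your instance's paths look like is an assumption):

```python
import sys
import fastai

# After `pipi -U fastai`, this should match the latest release
print(fastai.__version__)

# The user site-packages (where pipi installs) should be on the path,
# and should belong to the same Python version the kernel is running
print(sys.version)
print([p for p in sys.path if '.local' in p])
```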
23:08 - Why does the string version of vision_learner not work? Because create_timm_model creates a body and a head wrapped together in an nn.Sequential, and vision_learner does not expect a Sequential there.
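My rough mental model of the problem (a sketch, not the exact fastai source):

```python
import timm
import torch
import torch.nn as nn

def create_timm_model_sketch(arch, n_out, pretrained=False, **kwargs):
    # timm builds the body; num_classes=0 strips timm's own classifier,
    # so the model returns pooled features of size num_features
    body = timm.create_model(arch, pretrained=pretrained, num_classes=0, **kwargs)
    head = nn.Linear(body.num_features, n_out)
    # body and head come back already assembled into one Sequential...
    return nn.Sequential(body, head)

# ...whereas vision_learner's original code path expected a raw architecture
# callable it could instantiate and cut itself, not a finished nn.Sequential
model = create_timm_model_sketch('resnet18', 10)
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 10])
```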
24:19 - How did Jeremy fix the Sequential problem of vision_learner step by step? How did he arrive at the decision to change how vision_learner works?
28:40 - How did Jeremy work out the redesign of vision_learner step by step by changing create_body? Jeremy changed create_body to accept a model instead of an arch, and also made create_vision_model accept a model internally rather than just an arch.
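A rough before/after of that create_body change as I understood it (approximate signatures, not the exact fastai source):

```python
import torch.nn as nn
from torchvision.models import resnet18

# Before (approximate): create_body received an architecture callable
# and instantiated the model itself
def create_body_old(arch, pretrained=False, cut=-2):
    model = arch(pretrained=pretrained)
    return nn.Sequential(*list(model.children())[:cut])

# After (approximate): create_body receives an already-instantiated model,
# so the caller decides how to build it (torchvision arch, timm, custom, ...)
def create_body_new(model, cut=-2):
    return nn.Sequential(*list(model.children())[:cut])

body = create_body_new(resnet18())  # caller instantiates the model first
```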
32:23 - How did Jeremy apply the same design pattern to TIMM, i.e. TimmBody and create_timm_model?
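From memory, the key trick in TimmBody is using timm's forward_features to get the unpooled feature map the fastai head expects; a simplified sketch (close to, but not exactly, the fastai source):

```python
import timm
import torch
import torch.nn as nn

class TimmBody(nn.Module):
    "Sketch: wrap a timm model so fastai can treat it as a headless body."
    def __init__(self, model):
        super().__init__()
        # Models whose config defines a pool size have a pooled classifier;
        # for those we call forward_features to get the raw feature map
        self.needs_pool = model.default_cfg.get('pool_size', None) is not None
        self.model = model

    def forward(self, x):
        return self.model.forward_features(x) if self.needs_pool else self.model(x)

body = TimmBody(timm.create_model('resnet18', pretrained=False, num_classes=0))
print(body(torch.randn(2, 3, 224, 224)).shape)  # e.g. torch.Size([2, 512, 7, 7])
```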
35:26 - What about the keyword arguments we pass on to the timm model? How did Jeremy take care of **kwargs for timm models?
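My reading of the answer (hedged; the exact plumbing in fastai may differ): anything vision_learner doesn't recognise is forwarded through create_timm_model down to timm.create_model, so timm-only options just work. For example:

```python
import timm

# A timm-only option such as drop_rate rides along in **kwargs all the way
# down to timm.create_model, untouched by the fastai layers in between
def create_timm_model_sketch(arch, n_out, pretrained=False, **kwargs):
    return timm.create_model(arch, pretrained=pretrained,
                             num_classes=n_out, **kwargs)

m = create_timm_model_sketch('resnet18', 10, drop_rate=0.2)
```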