In lesson 14, around 1h48’ into the video, Jeremy suggests pretraining the U-Net down path on ImageNet, with a classifier on the end.
Has someone done this?
@jeremy mentioned that ImageNet can be trained in less than 4 hours for about USD 25, but I am not sure that if I tried it myself I would achieve that kind of result (I don’t even know whether it is possible to achieve superconvergence with a U-Net architecture). Anyway, is there a discussion or repo somewhere on this subject?
So what @jeremy means (I think?) isn’t training U-Net on ImageNet, since ImageNet doesn’t provide segmentation masks, which is what you need to train U-Net. The much more common practice is to use a pretrained network (like VGG16, InceptionV3, ResNet, …) as the encoder portion of U-Net, and then build the rest of U-Net on top of it. You can then freeze the layers from the original pretrained model, or train them very slowly. I don’t use the fast.ai library much, but I have a Kaggle Kernel using Keras doing exactly this with the Carvana masking data (https://www.kaggle.com/kmader/vgg16-u-net-on-carvana).
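To make the "pretrained encoder + U-Net decoder" idea concrete, here is a minimal PyTorch sketch. The `TinyEncoder` is a hypothetical stand-in for a real pretrained backbone (in practice you would slice e.g. torchvision's `vgg16.features` or a ResNet); the point is only the wiring: frozen encoder, skip connections, learnable decoder.

```python
import torch
import torch.nn as nn

# Hypothetical tiny encoder standing in for a pretrained backbone
# (in practice: slices of a pretrained VGG16/ResNet).
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.block3 = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())

    def forward(self, x):
        s1 = self.block1(x)   # full-resolution feature map (skip)
        s2 = self.block2(s1)  # 1/2-resolution feature map (skip)
        s3 = self.block3(s2)  # 1/4-resolution bottleneck
        return s1, s2, s3

class UNetOnEncoder(nn.Module):
    def __init__(self, encoder, freeze_encoder=True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:  # keep the "pretrained" weights fixed
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 1, 1)  # 1-channel mask logits

    def forward(self, x):
        s1, s2, s3 = self.encoder(x)
        x = self.dec1(torch.cat([self.up1(s3), s2], dim=1))  # skip connection
        x = self.dec2(torch.cat([self.up2(x), s1], dim=1))   # skip connection
        return self.head(x)

model = UNetOnEncoder(TinyEncoder())
mask = model(torch.randn(2, 3, 64, 64))
print(mask.shape)  # mask logits at input resolution: (2, 1, 64, 64)
```

Only the decoder's parameters receive gradients here; unfreezing the encoder (or giving it a lower learning rate) is the usual fine-tuning step afterwards.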
I have been thinking about this for some time. I thought @jeremy meant: form the encoder base of your U-Net, plug a classifier on top of it, and train that encoder+classifier on ImageNet for classification. After training, unplug the classification layer from the encoder base, plug in the decoder section of your U-Net architecture, and then train that on your own data.
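That plug/unplug workflow can be sketched in a few lines of PyTorch. This is a toy illustration, not the lesson's actual code: the encoder and both heads are hypothetical small modules, and the real version would include actual training loops on ImageNet and on your segmentation data.

```python
import torch
import torch.nn as nn

# Shared encoder: a toy stand-in for the U-Net down path.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)

# Step 1: plug a classifier head on the encoder and (pre)train for
# classification, e.g. on ImageNet's 1000 classes.
classifier = nn.Sequential(encoder, nn.AdaptiveAvgPool2d(1),
                           nn.Flatten(), nn.Linear(32, 1000))
logits = classifier(torch.randn(2, 3, 64, 64))  # (2, 1000) class logits
# (a real run would loop: loss = F.cross_entropy(logits, labels); backward; step)

# Step 2: unplug the classifier and plug a decoder onto the SAME encoder
# object, so the pretrained weights carry over unchanged.
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)
segmenter = nn.Sequential(encoder, decoder)
mask = segmenter(torch.randn(2, 3, 64, 64))  # (2, 1, 64, 64) mask logits

# Both models reference the same encoder, so pretraining transfers for free.
assert segmenter[0] is classifier[0]
```

Training the segmenter on your own data then fine-tunes the ImageNet-pretrained encoder plus the fresh decoder, which is exactly the strategy described above.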
Am I incorrect in that? This was the same strategy as DarkNet, right? @radek @sgugger
That is exactly what was suggested in lesson 14. I thought someone had done this.