Will CamVid work for segmentation of medical images?
Transfer learning and fine-tuning refer to the same thing: taking a pretrained model and adapting its weights to the problem at hand.
Thanks. Is the correct progression of sizes something we find by trial and error for other datasets?
You can transfer from CamVid in the same way you can transfer from ImageNet.
I am not sure, but I think CamVid is a video dataset, so it could help you with 3D segmentation in medical images (I have seen research on this before).
The idea of beginning at 128x128 then going to 256x256 should work well enough in many cases. If your original dataset permits it, you can even add a last step at 512x512.
Can a CNN be applied to 3D pixels, or voxels? Such as in the case of a CAD file
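3D convolutions do exist for exactly this kind of volumetric data; PyTorch exposes them as `nn.Conv3d`. A minimal sketch (the channel counts and the 32³ voxel grid are just illustrative):

```python
import torch
import torch.nn as nn

# A 3D convolution slides a kernel over depth, height and width at once.
# Input shape: (batch, channels, depth, height, width).
conv3d = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, padding=1)

volume = torch.randn(1, 1, 32, 32, 32)  # e.g. a 32^3 voxel grid, one channel
features = conv3d(volume)
assert features.shape == (1, 8, 32, 32, 32)  # spatial dims preserved by padding=1
```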
Thank you, so do I have to do a new create_cnn with the new dataset? Wouldn’t that mean a whole new learner model?
Is there a reason we shouldn’t deliberately make a lot of smaller datasets to step up from in tuning? let’s say 64x64, 128x128, 256x256, 512x512 etc…
You can swap the data object in the learner at any moment, without changing the model, with just learn.data = ...
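The reason the same model can serve several image sizes is that its head pools adaptively, so the output shape doesn't depend on the input resolution. A pure-PyTorch sketch of that property (the toy architecture here is illustrative, not fastai's actual head):

```python
import torch
import torch.nn as nn

# A CNN whose head uses adaptive pooling produces the same-shaped output
# regardless of input resolution, so the same weights can be trained first
# on 128px images and then on 256px images.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),  # collapses any spatial size down to 1x1
    nn.Flatten(),
    nn.Linear(16, 10),
)

out_128 = model(torch.randn(2, 3, 128, 128))
out_256 = model(torch.randn(2, 3, 256, 256))
assert out_128.shape == out_256.shape == (2, 10)
```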
What about starting with a small image size like 32px — is this better than starting with 128px?
Maybe we can train an image classifier using the graph produced by learn.lr_find() as input and Jeremy’s selected learning rate as the label
At some point, I imagine you would be losing too much information
Maybe we should train a CNN to see if it can learn the patterns in the LR Finder?
At which point? What’s the minimum?
Is there a good rule of thumb or ordering that helps decide which parameter to change when during a fine-tuning exercise? For example, how many times should one refine the learning rate before changing the number of epochs, and so on?
64x64 can be a tad too tiny to begin with. If you can afford it in terms of compute, it’s best to use the library’s data augmentation pipeline to do your resize, since it will apply only one interpolation, which makes the result better than if you resized to 128x128 (for instance) and then applied data augmentation.
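The point about interpolation can be seen directly: downscaling first and then resizing again discards detail that a single direct resize keeps. A small PIL sketch on a random image (sizes and the bilinear filter are just illustrative):

```python
import numpy as np
from PIL import Image

# Resizing twice loses information compared to resizing once.
rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 255, (512, 512, 3), dtype=np.uint8))

# One interpolation: 512 -> 256 directly.
direct = np.asarray(img.resize((256, 256), Image.BILINEAR), dtype=float)

# Two interpolations: 512 -> 128 -> 256; the intermediate 128px image
# has already thrown away detail the second resize cannot recover.
two_step = np.asarray(
    img.resize((128, 128), Image.BILINEAR).resize((256, 256), Image.BILINEAR),
    dtype=float,
)

# The two results differ — evidence of the information lost in step one.
assert np.abs(direct - two_step).mean() > 0
```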
What if we trained a deep learning model to automatically pick the best learning rate? Seems like a good group of people to create the dataset and we already have images to feed in. . .
That’s all the job of a deep learning practitioner
Jeremy gave some clues last week on how to know if you need to change your learning rate. We’ll see more tricks, but for hyper-parameter tuning we don’t have much more to offer than that: tricks, and then building your own experience of training models.
Is there a heuristic for batch size? Given that the GPU is not the constraint, should we always try to fit as big a batch size as possible?
How would the prediction code look for the planet and CamVid examples?
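The two tasks differ mainly in how you turn raw model outputs into predictions. A pure-PyTorch sketch using random logits as stand-ins for real model output (the 17 tags and 32 classes are illustrative; fastai's `learn.predict` wraps these steps for you):

```python
import torch

# Planet (multi-label classification): sigmoid each logit independently
# and keep every tag above a threshold — an image can have several labels.
planet_logits = torch.randn(1, 17)               # 17 hypothetical tags
planet_tags = torch.sigmoid(planet_logits) > 0.5  # boolean mask of predicted tags

# CamVid (segmentation): the model outputs one logit per class per pixel;
# take the argmax over the class dimension to get a class id for each pixel.
seg_logits = torch.randn(1, 32, 96, 128)          # 32 classes, 96x128 image
seg_mask = seg_logits.argmax(dim=1)               # shape (1, 96, 128)
assert seg_mask.shape == (1, 96, 128)
```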