📝 Deep Learning Lesson 3 Notes

A question came to my mind when reviewing this lesson: if you do the resizing that Jeremy did for the planets dataset (moving between 128 and 256 resolution) on the CamVid pictures, how would you manage the segmentation labels?

For example, if the original hi-res dataset only had 256×256 images and we lowered the resolution to 128, how would we know whether pixel (1,1) should take the label of (1,1) or of (1,2) in the 256 picture?

Is there any discussion on that topic?


If I understand you correctly, this has to do with something called “type dispatch”. In fastai, each transform declares which types of data it applies to, and it is then automatically applied to every item of that type. So any image transform is applied both to the image itself and to its segmentation mask, which means everything still “lines up” after any resizing / warping. For the mask specifically, resizing uses nearest-neighbour resampling, so each pixel in the 128 mask just takes the label of the nearest pixel in the 256 mask rather than a blend of labels.
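
For what it’s worth, here’s a minimal sketch of that behaviour with the current fastai (v2) API on the CamVid-Tiny subset; the `label_func` path mapping assumes that dataset’s file-naming convention, so adjust it for the full CamVid data:

```python
from fastai.vision.all import *
import numpy as np

path = untar_data(URLs.CAMVID_TINY)
codes = np.loadtxt(path/'codes.txt', dtype=str)

def label_func(fn):
    # CamVid-Tiny stores each mask as <image-stem>_P.png in the labels folder
    return path/'labels'/f'{fn.stem}_P{fn.suffix}'

dls = SegmentationDataLoaders.from_label_func(
    path, bs=8,
    fnames=get_image_files(path/'images'),
    label_func=label_func,
    codes=codes,
    # Resize is type-dispatched: bilinear resampling for the image,
    # nearest-neighbour for the mask, so the class indices stay valid
    # and image/mask remain aligned after the resize.
    item_tfms=Resize(128),
)

dls.show_batch(max_n=4)  # visually confirm the masks still line up at 128px
```

Swapping `Resize(128)` for `Resize(256)` (the “progressive resizing” trick from the planets notebook) works the same way, since the mask is resized alongside the image every time.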

Edit: haha just saw that this message was three years old, no idea why it showed up in “unread messages”