Lesson 3 Advanced Discussion ✅

Yes, that's it.

Hello everyone,
Is there any example code for creating a custom dataloader to use with the fastai.vision library for classification?
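
I don't know of an official example, but here is a minimal sketch of one approach with the fastai v1 API, assuming `DataBunch.create` accepts plain PyTorch datasets (`train_paths`, `train_labels`, etc. are placeholders you'd fill in):

```python
from fastai.vision import *
from torch.utils.data import Dataset

class MyImageDataset(Dataset):
    "Minimal PyTorch dataset returning (image tensor, label) pairs."
    def __init__(self, image_paths, labels):
        self.image_paths, self.labels = image_paths, labels
    def __len__(self): return len(self.image_paths)
    def __getitem__(self, i):
        img = open_image(self.image_paths[i])   # fastai image loader
        return img.data, self.labels[i]

train_ds = MyImageDataset(train_paths, train_labels)
valid_ds = MyImageDataset(valid_paths, valid_labels)
data = DataBunch.create(train_ds, valid_ds, bs=32)
```

Note that create_cnn expects the databunch to know its number of classes (data.c), so with this approach you may need to set that attribute yourself.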

Hey all,
Is there a way to make the fastai library use multiple GPUs?

Hi,

I’m doing image segmentation and have an issue where the y (mask) image has values of 0 or 255, meaning there are two classes. I want to rescale the mask values to the range [0, 1], which just requires dividing y by 255. Does anyone know of an example of how to change the values of y?

Never mind, I found `src.datasets(SegmentationDataset, classes=codes, div=True)`.
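
For anyone hitting the same issue: the key is the `div=True` flag, which divides the mask values by 255. The same idea at the single-image level, as a minimal sketch using fastai v1's `open_mask` (the path is a placeholder):

```python
from fastai.vision import open_mask

# div=True rescales mask pixel values from {0, 255} to {0, 1}
mask = open_mask('masks/img_001.png', div=True)
print(mask.data.unique())   # expect tensor([0, 1])
```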

I found something about multi-GPU usage in the “How to use multiple GPUs” thread; I will try it later today with the lesson 3 CamVid notebook.

Is there anything special about the structure of the U-net that makes it specific to segmentation, or is it just a more complex conv net that yields better results? Could we use a U-net for image classification?

The U-net does upsampling so that the height and width of the output match the height and width of the input image. This part is unnecessary for classification, though you could probably use upsampling in a classification architecture as long as you downsampled again. See here for an example: https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/70478
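
To make the shape difference concrete, here is a small sketch in plain PyTorch (toy tensors, not a real U-net):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 128, 128)            # input image batch

# Classification-style head: pool features down to a vector;
# spatial detail is deliberately thrown away
feats = torch.randn(1, 512, 4, 4)          # pretend encoder output
vec = F.adaptive_avg_pool2d(feats, 1)      # -> (1, 512, 1, 1)

# Segmentation-style head: upsample features back to the input
# resolution so every pixel gets its own prediction
low_res = torch.randn(1, 32, 16, 16)
up = F.interpolate(low_res, size=x.shape[-2:], mode='bilinear',
                   align_corners=False)    # -> (1, 32, 128, 128)
```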

I can attest that on my V100 instance with the latest 410 driver and CUDA 10, training still kills the kernel after running to_fp16() on the learner.

Hm, that’s interesting. Then the problem is probably somewhere else. I was thinking that updating to the most recent drivers would help.

Was anybody able to successfully run mixed-precision training so far?

Yes, updating the driver and CUDA would not help here. I am not sure whether updating cuDNN will help. If you have spare time to experiment, please let me know.
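
For anyone following along, this is the pattern we’re all trying to get working, as a minimal sketch assuming fastai v1 (path and hyperparameters are placeholders, and it still presumes a healthy CUDA/driver setup):

```python
from fastai.vision import *

data = (ImageDataBunch.from_folder(path, size=128, bs=64)
        .normalize(imagenet_stats))
learn = create_cnn(data, models.resnet34, metrics=accuracy).to_fp16()
learn.fit_one_cycle(1)   # forward/backward run in mixed precision
```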

Hi, it might be a silly question, but I would really like to understand how we decide whether to use create_unet or create_cnn, and what to expect from these methods.
My understanding is that create_unet will reduce and then increase the layer sizes, whereas create_cnn doesn’t have any such structure to follow.

After looking at the code, I find that the U-net is another variant of a CNN.

Please correct my understanding if I am wrong, and if you have a more intuitive understanding, please share.

U-net is specifically designed for the segmentation task; use create_unet for segmentation tasks only.
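
To make that concrete, here is a hedged sketch of how the two are typically called in fastai v1 (create_unet was later renamed unet_learner, so the exact call depends on your version; `data_clf` and `data_seg` are placeholder databunches):

```python
from fastai.vision import *

# Classification: a CNN body plus a pooling + linear head
learn_clf = create_cnn(data_clf, models.resnet34, metrics=accuracy)

# Segmentation: the same kind of encoder, plus a decoder that
# upsamples back to input resolution so every pixel is classified
learn_seg = Learner.create_unet(data_seg, models.resnet34, metrics=dice)
```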

I have noticed that DynamicUnet uses F.interpolate to do the upsampling instead of transposed convolutions, except at the very end, where there is a transposed convolution. Can someone shed some light on why this choice was made (interpolate instead of transposed convs) and why we still use a transposed conv at the end?

I’m doing some segmentation on 3D medical images and want to incorporate these good practices. Thanks :)
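
Not authoritative, but one commonly cited reason is that transposed convolutions tend to produce checkerboard artifacts, while fixed interpolation followed by a regular conv avoids them (see https://distill.pub/2016/deconv-checkerboard/). A sketch of the two upsampling options in plain PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 64, 16, 16)

# Option 1: fixed upsampling, then a normal conv to refine.
# The conv sees evenly spaced inputs, so no checkerboard pattern.
interp = F.interpolate(x, scale_factor=2, mode='nearest')
out1 = nn.Conv2d(64, 32, kernel_size=3, padding=1)(interp)     # (1, 32, 32, 32)

# Option 2: a learnable transposed convolution; when the kernel
# size isn't a multiple of the stride, overlapping footprints
# are what cause the checkerboard artifacts
out2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)(x)  # (1, 32, 32, 32)
```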

Can I use K-nearest neighbors to impute missing values? I think it could be a cool addition to the library when used on tabular data.
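
Recent versions of scikit-learn ship a KNN imputer you could prototype with before wiring anything into fastai; a minimal sketch (the feature matrix is made up):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, 6.0]])

# Each missing value is filled from its 2 nearest complete rows
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```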

I’m not sure whether someone already asked this question, but here it is again in that case: how can we first create a model using images of size 128 and then just use the same model with images of size 256? We didn’t make any changes, so what is happening inside the fastai library?
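
The short answer is that convolutions are size-agnostic, and the adaptive pooling before the head squeezes any spatial size down to a fixed-length vector, so the same weights keep working at 256. A hedged sketch of the progressive-resizing pattern (fastai v1; path and hyperparameters are placeholders):

```python
from fastai.vision import *

# Stage 1: train at 128px
data_128 = (ImageDataBunch.from_folder(path, size=128, bs=64)
            .normalize(imagenet_stats))
learn = create_cnn(data_128, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)

# Stage 2: swap in 256px data; the model is untouched because
# adaptive pooling makes the head independent of input resolution
data_256 = (ImageDataBunch.from_folder(path, size=256, bs=32)
            .normalize(imagenet_stats))
learn.data = data_256
learn.fit_one_cycle(4)
```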

Hi All,

if I want to classify URLs as malicious or clean, how can I use NLP with transfer learning? I need character-level embeddings, since I don’t think I can apply word-level embeddings to URLs. Any ideas?
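
One option in fastai v1 is to plug in a custom character-level tokenizer and then pretrain your own character-level language model (the supplied pretrained weights are word-level, so they won’t transfer directly). A hedged sketch, assuming the fastai v1 Tokenizer/BaseTokenizer API (path and 'urls.csv' are placeholders):

```python
from fastai.text import *

class CharTokenizer(BaseTokenizer):
    "Split text into individual characters instead of words."
    def tokenizer(self, t): return list(t)

# Disable the default word-level pre/post rules, which don't
# make sense for URLs
tok = Tokenizer(tok_func=CharTokenizer, pre_rules=[], post_rules=[])
data_lm = TextLMDataBunch.from_csv(path, 'urls.csv', tokenizer=tok)
```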

learn.model = torch.nn.DataParallel(learn.model)
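
In context, a minimal sketch of where that line goes (assuming the GPUs are visible to PyTorch):

```python
import torch
from fastai.vision import *

learn = create_cnn(data, models.resnet34, metrics=accuracy)
if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across all visible GPUs
    learn.model = torch.nn.DataParallel(learn.model)
learn.fit_one_cycle(1)
```

One caveat: weights saved from a DataParallel-wrapped model get a module. prefix, so unwrap the model before learn.save if you plan to reload on a single GPU.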

I am working on the PASCAL VOC 2012 challenge for the semantic segmentation problem. This problem consists of 21 classes. Each pixel is labeled with a class index: background = 0, the classes use index values 1-20, and index value 255 marks the void class. For more information on all of these classes: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/segexamples/index.html
I was working on this dataset in PyTorch using fully convolutional networks: https://arxiv.org/abs/1411.4038
I used `torch.nn.CrossEntropyLoss()` as my loss function. This loss function has an argument called ignore_index, which allowed me to ignore the index value 255 in the masks. My problem is: how can I ignore this 255 pixel value in fastai?
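
In fastai v1 you can override the learner’s loss function, and CrossEntropyFlat forwards extra keyword arguments to nn.CrossEntropyLoss, so something like this should work (a sketch, not tested on VOC):

```python
from fastai.vision import *

learn = Learner.create_unet(data, models.resnet34)
# axis=1 flattens across the class dimension for segmentation;
# ignore_index=255 makes the loss skip the void pixels entirely
learn.loss_func = CrossEntropyFlat(axis=1, ignore_index=255)
```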

I have a question: I want to train ResNet34 using grayscale images. How can I do that?
The only way I can think of is:

  1. Modify the input channels from 3 to 1 in the first conv layer of ResNet34 (see the sketch below).

If this is the way, I don’t know which fastai file I should modify to do that. Or is there any other way?
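
You shouldn’t need to modify any fastai file; you can patch the model after creation in plain PyTorch. A hedged sketch that swaps the stem conv for a 1-channel one and initializes it from the mean of the pretrained RGB filters:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(pretrained=True)

# Replace the 3-channel stem conv with a 1-channel version
old_conv = model.conv1
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)
with torch.no_grad():
    # Average the pretrained RGB filters so the new conv starts
    # from meaningful weights instead of random ones
    model.conv1.weight.copy_(old_conv.weight.mean(dim=1, keepdim=True))
```

The simpler alternative is to duplicate the grayscale channel three times at load time and keep the stock 3-channel model.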