Is there a UNet example using fastai2 anywhere?

I didn’t see an example on GitHub, and was wondering if anyone had an example of how to create this type of network in fastai2.

Look at the CamVid example under course/.


Thanks! I’m not sure how I overlooked that one.

Hi. Thanks for pointing to the notebook. I have a question. I got this error

AssertionError: `n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`

I guess this is probably the number of output classes. So can you tell me whether I can use this model for image reconstruction instead of the semantic segmentation shown in the notebook?

`c_out` is what it should be now


I’m sorry, but I didn’t get `c_out` — that’s the error it was printing. I’m using fastai v2, installed through Anaconda. Did I get anything wrong?

Oh yes, sorry, I misread. You should assign `dls.c` a value equal to the number of classes, as the error states. (I forget why this isn’t done automatically.)


Ah, alright. :sweat_smile:

But I don’t want to classify anything. I am doing image reconstruction. Can I use UNet for that purpose?


Hi. Did you figure out the answer to your question?
I also want to use the UNet for image reconstruction, but fastai is forcing me to assign an `n_out` in the learner, which doesn’t make sense in this case. My input and output blocks are both `PILImageBlock`.


I am facing issues with UNet as well. I’m asking it to normalize images, but it’s only normalizing the input images and not the masks. I also keep getting a CUDA out-of-memory error on Colab. The CamVid tutorial doesn’t really clear up all the doubts.

My issue was solved by setting `n_out` to 3. It basically means we are asking the model to generate a 3-channel image (like an RGB image).

For your case, I think you can normalize your input and output images the same way if you define both your input and output blocks as `ImageBlock`.

CUDA memory errors are usually solved by reducing the batch size, reducing the image dimensions, or using a simpler model. If you don’t want to do any of those, you could try training on multiple GPUs.
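The usual knobs look something like this — a config sketch, assuming you already have a `dblock` and `path` defined for your own dataset; the specific values (`bs=4`, `Resize(96)`, `resnet18`) are just illustrative:

```python
# Smaller batch size
dls = dblock.dataloaders(path, bs=4)

# Smaller images: pass item_tfms=Resize(96) when building the DataBlock

# Lighter backbone
learn = unet_learner(dls, resnet18, n_out=3)

# Mixed precision roughly halves activation memory on GPU
learn = learn.to_fp16()
```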
