GAN without labels? (lesson 7)

Hi! I’m trying to pre-train the generator part of a GAN on images with labels and then train the GANLearner on non-labelled images. To my understanding, the discriminator of GAN provides the labels and thus the generator doesn’t need pre-made labels in the dataset. Is this correct?

In lesson 7 (image resolution improvement) I pre-trained the generator on a dataset with high quality pet images (labels) and their crappified versions. I then want to use a different dataset with only crappy images of animals to train the GAN. Shouldn’t this work since the discriminator’s job is to create the labels?

I couldn’t get it working. Because my final dataset doesn’t have labels, I assigned the same images as labels in the generator’s dataset to make the code run. The discriminator has the new crappy images and the old high quality images. As a result, GAN just keeps all images unchanged. If I try to leave out the generator’s labels, the code doesn’t run.

Any hints to guide me towards the right direction?


I’m not sure that I fully understand your needs, but a GAN is composed of two models:

  • The generator, which doesn’t need labels since it takes an image as input and outputs an image
  • The discriminator, which does need labels since it is a classification model that takes an image as input and outputs a label (real or fake)

You will always need labels for your discriminator. I suggest you follow along with this notebook and look at the data that is provided to the generator and the discriminator.
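To illustrate the real/fake part concretely, here is a minimal NumPy sketch (not the actual fastai code; `make_critic_batch` is a hypothetical helper) of how the discriminator’s labels are created on the fly: images drawn from the real dataset are labelled 1 and generator outputs are labelled 0, so these particular labels never need to exist in the dataset itself.

```python
import numpy as np

def make_critic_batch(real_imgs, fake_imgs):
    """Build a labelled batch for the critic (discriminator).

    The labels are created on the fly: 1 for images from the real
    set, 0 for images produced by the generator. This is what
    "the discriminator provides the labels" means in practice.
    """
    x = np.concatenate([real_imgs, fake_imgs], axis=0)
    y = np.concatenate([np.ones(len(real_imgs)),
                        np.zeros(len(fake_imgs))])
    return x, y

real = np.random.rand(4, 3, 32, 32)   # e.g. high-quality images
fake = np.random.rand(4, 3, 32, 32)   # e.g. generator outputs
x, y = make_critic_batch(real, fake)
print(x.shape, y.tolist())  # (8, 3, 32, 32) [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

Note that while these real/fake labels are free, the “real” half of each batch still has to come from somewhere, which is why the critic needs a set of genuine high-quality images.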

Thanks @polegar. I had previously thought that the discriminator gets shown a real image (high-quality dog pic) and an unrelated fake image (low-quality horse pic), and should distinguish which one is more likely a fake. So this is not how the discriminator works, and it will always need the correct label for each image?

As in, does the discriminator always want to compare these two (fake img and real label img):

and it cannot make the comparison between these two (fake img and unrelated real img):

Sorry for the confusing description. My ultimate goal is to use GAN for removing the background from images of cars. I’m pre-training the generator with the Carvana car dataset used in 2018 DL course lesson 14. However, the images in this dataset have a very standard set-up (always same background) so it doesn’t generalize well into images of cars in the real world. Therefore I’m trying to do the final training of the GAN with a real world dataset without labels.

If this is indeed not possible, I’ll need to go for a different approach and try to make the Carvana dataset more like real world by adding some natural background in the car images.
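That alternative approach can be sketched in a few lines. Here is a minimal NumPy illustration (an assumption of how one might do it; `add_background` is a hypothetical helper, not part of the course code) that composites a Carvana car onto a natural background using the car’s label mask:

```python
import numpy as np

def add_background(car_img, car_mask, background):
    """Composite a car onto a new background using its label mask.

    car_img:    (H, W, 3) float image of the car
    car_mask:   (H, W) binary mask, 1 where the car is
    background: (H, W, 3) float image of a natural scene
    """
    m = car_mask[..., None].astype(car_img.dtype)
    # Keep car pixels where the mask is 1, background pixels elsewhere
    return m * car_img + (1.0 - m) * background

car = np.ones((64, 64, 3))            # stand-in for a Carvana image
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1                # stand-in for the car's mask
bg = np.random.rand(64, 64, 3)        # stand-in for a natural scene
augmented = add_background(car, mask, bg)
```

Running this over many (image, mask, background) triples would give a training set whose backgrounds vary much more than the original Carvana setup.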

I think you don’t need to use a GAN for this problem. If you want to remove the background, it is a segmentation problem, not a generative one. Your model doesn’t have to reconstruct an image without the background; it just needs to output a mask for the car, and you can then use this mask to extract the car from the image.
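The mask-then-extract step is trivial once the segmentation model has done its job. A minimal NumPy sketch (assuming the model outputs a per-pixel binary mask; `extract_car` is a hypothetical helper name):

```python
import numpy as np

def extract_car(img, mask):
    """Zero out background pixels using a predicted binary mask.

    img:  (H, W, 3) image array
    mask: (H, W) array, 1 where the car is, 0 elsewhere
    """
    # Broadcast the mask over the colour channels
    return img * mask[..., None]

img = np.random.rand(64, 64, 3)       # stand-in for a real photo
mask = np.zeros((64, 64))
mask[10:50, 20:60] = 1                # stand-in for a model prediction
car_only = extract_car(img, mask)     # background pixels are now 0
```

So the generative model never needs to invent any pixels; everything outside the mask is simply discarded.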

The problem you seem to describe is typically an image segmentation task rather than a generative one. The go-to architecture is then a U-Net rather than a GAN. What’s more, it was shown in last year’s course, so you might want to check it out.

Here is the link to the notebook:
And the link to the lesson:

It uses fastai 0.7, so you either need to downgrade your version of fastai or change the notebook to use the v1 API.

Thanks for the replies @Henri.C @NathanHub. I have tried the direct segmentation approach from the 2018 course’s lesson 14, but it failed since the dataset represents only a very narrow and standardized environment.


All Carvana images have the same background, so even though it works well for this dataset the results are significantly worse on real world images. This is why I wanted to try a GAN for the problem. Do you guys happen to know of any high-precision labelled car segmentation datasets with real world images?

You may use this dataset: