Lesson 7: Bedroom WGAN

I was watching lesson 7 yesterday and going through the WGAN lesson notebook (https://nbviewer.jupyter.org/github/fastai/course-v3/blob/master/nbs/dl1/lesson7-wgan.ipynb). I could run it successfully, but it would be very nice to generate higher-resolution images (1024x1024).

I’ve been investigating, and generating something like that using NVIDIA’s open source GAN project needs a huge amount of resources.

Is there some way to achieve something like that with the fastai library and fewer resources? (I have a GTX 1080 GPU.)

Thanks!

To reduce memory needs, try mixed precision training (.to_fp16()) and lower your batch size as far as necessary. If the batch size becomes too low to get stable gradients, you will want to accumulate gradients over multiple batches before updating your weights. That means writing a callback that implements on_step_end so that zero_grad is not called automatically but manually by you in this callback; that is how you regulate the effective batch size used for weight updates.
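To illustrate the idea, here is a minimal sketch of gradient accumulation in plain PyTorch (the model, data, and accum_steps value are placeholders I made up, not anything from the notebook); in fastai you would move this same logic into a callback as described above:

```python
import torch
from torch import nn, optim

# Hypothetical setup: a tiny model and random data, just to show the pattern.
model = nn.Linear(10, 1)
opt = optim.Adam(model.parameters(), lr=1e-3)
loss_func = nn.MSELoss()

accum_steps = 4  # number of small batches to accumulate before one weight update

opt.zero_grad()
for i in range(100):
    xb, yb = torch.randn(8, 10), torch.randn(8, 1)  # stand-in for a real dataloader
    loss = loss_func(model(xb), yb) / accum_steps   # scale so accumulated grads average out
    loss.backward()                                 # gradients add up in each param's .grad
    if (i + 1) % accum_steps == 0:
        opt.step()       # one weight update per accum_steps forward/backward passes
        opt.zero_grad()  # only clear gradients after the update, not after every batch
```

The key point is that opt.zero_grad() is only called once every accum_steps batches, so the effective batch size for each weight update is accum_steps times the per-batch size.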

This way you can get around GPU memory restrictions somewhat… But it means you run multiple forward and backward passes for a single weight update, so you need far more iterations and therefore a much longer training time.

This “getting super results with few resources in fastai” thing (as you have probably heard in the context of the DawnBench competition) is not exclusive to the fastai framework… It comes from using smart techniques, and most of them are easily accessible in fastai (like mixed precision, one cycle, etc.). Using those smart techniques easily, without writing lots of code, is probably exclusive to fastai.
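For example, in a standard fastai v1 image setup (a sketch; `data` here is assumed to be a DataBunch you have already built, and the architecture and hyperparameters are just placeholders), both techniques are one-liners:

```python
from fastai.vision import *

# `data` is assumed to be an ImageDataBunch built elsewhere.
learn = cnn_learner(data, models.resnet34, metrics=accuracy).to_fp16()  # mixed precision
learn.fit_one_cycle(5, max_lr=1e-3)                                     # one cycle schedule
```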

The problem is that, except for the mixed precision trick, most of the tricks that reduced the need for large resources in DawnBench cannot be applied easily to GANs.
As far as I am aware, the one cycle schedule is not useful at all for GANs (since you alternate your optimization every step or every few steps). Neither is mixup nor progressive resizing (well, progressive resizing is roughly what NVIDIA is doing there anyway, assuming you mean the progressive GANs paper).
You can try transfer learning, as is done in lesson 7 for GANs; however, it is not clear how to apply it in a sensible way beyond the first image size when progressively growing.

I’m afraid that, for a progressive GAN, using fastai over plain PyTorch will not help you much in terms of resources.

Transfer learning will cut your training time down to roughly 1/100th, IIRC from lessons 1-2. I made a repo for StyleGAN by NVIDIA, which I think is the open source project you are referring to. I was able to train on portrait art, CT scans, cartoons, etc. using transfer learning. See the repo here: https://github.com/ak9250/stylegan-art
