Lesson 7 further discussion ✅

#25

Hi, I would like to double-check Model0 from the Human Numbers notebook with you.

The code for Model0 in the video differs slightly from the code in the Jupyter notebook:

  • if x.shape[0]>1: (in the video)
  • if x.shape[1]>1: (in the Jupyter notebook)

As it turns out, I can replace the condition with if True:, or remove the if branches entirely, and the result is the same.

I added counters to check how many times we enter each branch, like this:

import torch.nn as nn
import torch.nn.functional as F

class Model0(nn.Module):
    def __init__(self):
        super().__init__()
        # nv (vocab size) and nh (hidden size) are defined earlier in the notebook
        self.i_h = nn.Embedding(nv, nh)  # green arrow
        self.h_h = nn.Linear(nh, nh)     # brown arrow
        self.h_o = nn.Linear(nh, nv)     # blue arrow
        self.bn = nn.BatchNorm1d(nh)
        self.counter0 = 0
        self.counter1 = 0
        self.counter2 = 0

    def forward(self, x):
        self.counter0 += 1

        h = self.bn(F.relu(self.i_h(x[:,0])))
        if x.shape[0] > 1:
            self.counter1 += 1
            h = h + self.i_h(x[:,1])
            h = self.bn(F.relu(self.h_h(h)))

        if x.shape[0] > 2:
            self.counter2 += 1
            h = h + self.i_h(x[:,2])
            h = self.bn(F.relu(self.h_h(h)))
        return self.h_o(h)

As it turns out, these lines (where m is the model instance):

print(x.shape)
print(m.counter0)
print(m.counter1)
print(m.counter2)

will print:

torch.Size([64, 3])
1974
1974
1974
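
A quick way to see why all three counters match (a sketch; the shape [64, 3] is taken from the printout above):

import torch

x = torch.zeros(64, 3, dtype=torch.long)   # a batch shaped like the printout
# Batch dimension: 64 > 1 and 64 > 2, so both branches always run.
print(x.shape[0] > 1, x.shape[0] > 2)      # True True
# Sequence dimension: 3 > 1 and 3 > 2 also both hold here, which is why
# swapping the index (or writing `if True:`) changes nothing.
print(x.shape[1] > 1, x.shape[1] > 2)      # True True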

Any feedback?

0 Likes

(Pierre Guillou) #26

Hi @jeremy. In lesson 7, you show in the lesson7-wgan.ipynb notebook how to generate fake bathroom images by training a WGAN.

The training set you use has 303,125 images, and you train your GAN for 30 epochs with an lr of 2e-4.

I tried to use the exact same code with mango images from the ImageNet dataset, which has only 1,305 images (about 500 after cleaning).

However, even after 100 epochs, the result is bad. I guess my issue is the size of my training dataset?
In your experience, what would be the minimum size for the training dataset of a WGAN? And how should I choose the right lr? Thank you.

[Image: results after 100 epochs (lr = 2e-4)]

[Image: results after 100 epochs (lr = 2e-3)]

[Image: the Databunch]

3 Likes

#27

Any feedback on this tiny issue?

0 Likes

(Thomas) #28

I would like to know how to use the models shown in the human-numbers notebook to generate predictions.
As far as I can see, the batch size is hard-coded in the model, so if I want to predict a batch of size 1 (a single prediction), it is not possible.
I am building an equivalent model to forecast time series: given the N past values, I would like to predict value N+1.
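
One workaround I am considering (a sketch only, assuming the model keeps its hidden state in a self.h sized to the training batch, like Model3 in the notebook; the Model3Flexible name and details are my own) is to re-create the state whenever the incoming batch size differs:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model3Flexible(nn.Module):
    "Hypothetical variant of the notebook's Model3 that accepts any batch size."
    def __init__(self, nv, nh):
        super().__init__()
        self.nh = nh
        self.i_h = nn.Embedding(nv, nh)
        self.h_h = nn.Linear(nh, nh)
        self.h_o = nn.Linear(nh, nv)
        self.bn = nn.BatchNorm1d(nh)
        self.h = torch.zeros(1, nh)  # placeholder; re-sized on first use

    def forward(self, x):
        # Re-create the hidden state whenever the batch size changes,
        # e.g. for a single-example batch at inference time (call model.eval()
        # first so BatchNorm uses its running statistics).
        if self.h.shape[0] != x.shape[0]:
            self.h = torch.zeros(x.shape[0], self.nh)
        res = []
        for i in range(x.shape[1]):
            self.h = self.h + self.i_h(x[:, i])
            self.h = self.bn(F.relu(self.h_h(self.h)))
            res.append(self.h_o(self.h))
        self.h = self.h.detach()  # truncate backprop between batches
        return torch.stack(res, dim=1)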

0 Likes

#29

I’m curious to know if anyone tried to pass that black hole picture through a superres network…

0 Likes

(Andreas) #30

I tried it, and the result is not too impressive:

[Image: super-resolution output on the black hole picture]

However, it makes sense. The loss function was based on a pretrained ResNet34, and the training data didn’t have a lot of black hole images (or anything remotely similar). Training a model on pictures of planets, stars, etc. instead of cats might help too :stuck_out_tongue:

(It did however work extremely well on cats.)

3 Likes

#31

Thanks for trying!

0 Likes

(Divyanshu Sharma) #32

Has anyone used models from this GAN zoo: https://github.com/eriklindernoren/PyTorch-GAN? Are the implementations reliable and error-free?

0 Likes

#33

Hi,

In the lesson7-superres notebook, I am looking at the gram_matrix function’s output and have a hard time understanding why a certain result occurs on the diagonal. If I take a single unit vector v_1 and compute its inner product with itself, ⟨v_1, v_1⟩ = cos θ, then since the angle is 0, the value should be 1; namely, the diagonal should be full of ones. Why is that not the case?

def gram_matrix(x):
    n,c,h,w = x.size()
    x = x.view(n, c, -1)                    # flatten each channel to a vector of length h*w
    return (x @ x.transpose(1,2))/(c*h*w)   # all pairwise dot products, scaled by c*h*w

gram_matrix(t)
> tensor([[[0.0759, 0.0711, 0.0643],
           [0.0711, 0.0672, 0.0614],
           [0.0643, 0.0614, 0.0573]],

          [[0.0759, 0.0711, 0.0643],
           [0.0711, 0.0672, 0.0614],
           [0.0643, 0.0614, 0.0573]]])
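
To test my premise, here is a quick check (t above comes from the notebook; the random stand-in below is mine): gram_matrix divides by c*h*w rather than by the vectors’ norms, so the diagonal holds ||v_i||^2/(c*h*w) rather than cos 0 = 1. Normalizing each flattened channel to unit length first does put ones on the diagonal:

import torch
import torch.nn.functional as F

t = torch.randn(2, 3, 4, 4)        # stand-in for the notebook's example tensor
n, c, h, w = t.size()
v = t.view(n, c, -1)

v_unit = F.normalize(v, dim=2)     # scale each flattened channel to unit length
g_cos = v_unit @ v_unit.transpose(1, 2)
print(g_cos[0].diagonal())         # ≈ tensor([1., 1., 1.])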

Thank you in advance for your help :slight_smile:

0 Likes

#34

Sorry, I didn’t look at the competition, but reading Jeremy’s comment… could Spark be helpful in this situation? Downsampling?

0 Likes

#35

Does anyone know where the lesson 7 Kaggle notebooks for superres, superres-imagenet, superres-gan and wgan are?

0 Likes

(Levi Ritchie) #36

I suspect that, beyond just regular pictures of planets and stars, you’d need a large set of high-resolution images of celestial objects taken with the same kind of radio-telescope imaging used for the black hole picture. Especially gaseous objects, since I think that’s what we’re seeing in the black hole picture. Then, rather than blurring them with image compression, you’d need a function that approximates the blur of objects in space across a vast distance.

Unfortunately, I’m a few million bucks shy of a good radio telescope.

0 Likes

#37

Edit: Duh, I feel stupid. Everything worked all along; the picture was just wrapped inside a tuple. The correct output is in pred_img[0].
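
For anyone else hitting this, the working call is simply (learn.predict hands back a tuple whose first element is the reconstructed Image, as the output further down shows):

pred = learn.predict(infer_img)  # a tuple: (Image, tensor, ...)
pred[0].show()                   # view the reconstructed image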

What would be the simplest way to run inference on the Superres GAN with new images after it has been trained? I have tried feeding a new image to the learn.predict method, but the outcome doesn’t seem to make sense, or I’m using it wrong: I’m unable to view the outcome as an image. Below is what I’ve tried so far:

Train learner

lr = 1e-4
bs, size = 32, 128
switcher = partial(AdaptiveGANSwitcher, critic_thresh=0.65)
learn = GANLearner.from_learners(learn_gen, learn_crit, weights_gen=(1.,50.), show_img=True, switcher=switcher,
                                 opt_func=partial(optim.Adam, betas=(0.,0.99)), wd=wd)
learn.callback_fns.append(partial(GANDiscriminativeLR, mult_lr=5.))

learn.fit(4, lr/2)

epoch  train_loss  valid_loss  gen_loss  disc_loss  time
0      1.525757    1.570713                         04:49
1      1.394629    1.828694                         04:51
2      1.385353    1.615942                         04:50
3      1.397209    1.349061                         04:51

Switch to generative mode

learn.gan_trainer.switch(gen_mode=True)

Open new image

infer_img = open_image('infer.png')
infer_img.shape

torch.Size([3, 224, 224])

Try inference but fail

pred_img = learn.predict(infer_img)
pred_img.shape

AttributeError: ‘tuple’ object has no attribute ‘shape’

Investigate the output

pred_img

(Image (3, 128, 128),
 tensor([[[1.0036, 1.0291, 1.0049,  ..., 1.0012, 1.0105, 0.9864],
          [1.0247, 1.0235, 0.9945,  ..., 0.9945, 1.0167, 1.0068],
          [1.0018, 1.0116, 1.0061,  ..., 0.9979, 0.9967, 0.9928],
          ...,

pred_img = fastai.vision.Image(pred_img)
pred_img.show()

AttributeError: ‘tuple’ object has no attribute ‘cpu’

Any help is appreciated!

0 Likes

(Kat) #38

Hey guys - I’ve got an interesting question beyond the exercise. I noticed that GANs require much more compute than, say, typical CV or NLP models. I’m curious: what are the most compute-hungry ML tasks? Would GANs be at the top? Followed by what?

Just being curious :)

0 Likes

(xnet) #39

Is there a way to “un-normalize” an image’s prediction (for the super-resolution model)?

Here’s what I have.

# load_img is my custom function using openCV that outputs a numpy array
img_ori_1 = load_img(tst_data.train_ds.x.items[i])
img_ori_2 = load_img(tst_data.train_ds.y.items[i])

# Make predictions
p, img_pred_1, b = learn.predict(tst_data.train_ds[i][0])
p, img_pred_2, b = learn.predict(tst_data.train_ds[i][1])

# Permute axis and convert to numpy
img_pred_1 = img_pred_1.permute(1, 2, 0).numpy()
img_pred_2 = img_pred_2.permute(1, 2, 0).numpy()

After this, the img_pred_1 values are still normalized and cannot be compared with my img_ori_1 values. How do I “un-imagenet-normalize” my predicted images? Thanks!
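
One direction that seems to work (a sketch, assuming the data was normalized with fastai’s imagenet_stats; the mean and std below are the standard ImageNet channel statistics):

import numpy as np

# Standard ImageNet channel statistics (what fastai's imagenet_stats holds).
mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])

# img_pred_1 is (h, w, c) after the permute above; invert x = (x - mean) / std,
# then clamp small overshoots back into the valid [0, 1] range.
img_denorm = np.clip(img_pred_1 * std + mean, 0, 1)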

0 Likes

(Mai) #40

Hi,

I am a little confused about GANs. When should I use freeze instead of unfreeze?

Thank you,

0 Likes

(Ajay Arasanipalai) #41

That would depend on how big you want the network to be. For example, you could create an image classifier that has more parameters than a GAN.

0 Likes

(Abhimanyu) #43

When we implement a residual block using fast.ai’s res_block function, the input first goes through two conv_layers and then the result is added to the input. But according to the figure in the Kaiming He et al. paper, the input first goes through a conv_layer, then a Conv2d, and then the result is added to the input, followed by a ReLU and then BatchNorm. In short, the skip connection is applied before the activation. I wanted to ask whether this matters. I tried both, and there wasn’t much difference in results: 99.51% accuracy with res_block and 99.52% with the resblock class I wrote.
This is the resblock class I wrote:
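
(The class itself was attached as an image and didn’t survive into the text; below is a minimal sketch of the ordering described above, with the skip connection added before the activation. It assumes fastai v1’s conv_layer and conv2d helpers, and the class name is my own.)

import torch
import torch.nn as nn
from fastai.layers import conv_layer, conv2d

class ResBlockPreAct(nn.Module):
    "Residual block with the identity added before ReLU, then BatchNorm."
    def __init__(self, nf):
        super().__init__()
        self.conv1 = conv_layer(nf, nf)   # conv + BN + ReLU
        self.conv2 = conv2d(nf, nf)       # plain conv, no activation yet
        self.bn = nn.BatchNorm2d(nf)

    def forward(self, x):
        out = self.conv2(self.conv1(x))
        return self.bn(torch.relu(out + x))  # skip connection before the activation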

0 Likes

(Abhimanyu) #44

Hi everyone, I wanted to ask: suppose we train our decrappifier perfectly and it works well, so if we feed it a crappified image as input, it outputs a high-resolution image. What if we feed the high-resolution output from the previous pass back in again? Will the output be even better than the input, or will it be the same?

0 Likes

(3stone) #45

Hi.
Does anyone know the difference between ImageList and ImageImageList?
In lesson7-superres-gan.ipynb, we use ImageImageList. When I change it to ImageList, I get an error. There is no more info about ImageImageList in the docs.
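
From digging around, the definition in fastai v1’s vision source seems to be tiny (quoted roughly from memory, so treat it as approximate). It looks like an ImageList whose labels are themselves images, which is what the image-to-image data block pipeline needs:

class ImageImageList(ImageList):
    "`ItemList` suitable for `Image` to `Image` tasks."
    _label_cls, _square_show, _square_show_res = ImageList, False, False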

Thanks.

0 Likes