Lesson 7 in-class chat ✅

Thanks for the reply, but that doesn't work with my current server configuration. I think it has to do with the recently released PyTorch version, and I didn't want to make sweeping changes to the configuration before the last lecture.
Don't fret, I plan on rebuilding the server over the winter break and going through all the notebooks, lectures, and such again.
I will start a new thread in the forums should anything arise.

Here is the link for *A guide to convolution arithmetic for deep learning*


Thanks

Is a cross connection the same as a skip connection?


Why do you concat before calling conv2(conv1(x)), not after?


Yes, as per the paper: we deconv (merged with the skip features), followed by the convs.
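A rough sketch of that ordering in PyTorch (my own illustration, not the actual fastai code): upsample first, concatenate the skip features, then run the convs on the merged tensor.

```python
import torch
import torch.nn as nn

class UnetUpBlock(nn.Module):
    """Illustrative U-Net up block: deconv, merge skip features, then convs."""
    def __init__(self, up_in_ch, skip_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(up_in_ch, up_in_ch // 2, kernel_size=2, stride=2)
        self.conv1 = nn.Conv2d(up_in_ch // 2 + skip_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, skip):
        x = self.deconv(x)               # upsample to the skip features' grid size
        x = torch.cat([x, skip], dim=1)  # concat *before* the convs
        return self.relu(self.conv2(self.relu(self.conv1(x))))
```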

When the concatenation occurs in a DenseNet or U-Net, where does the concatenated information get added? Is it an additional layer, or is it attached to the side, increasing the width of the image, as it appears in the picture?

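As far as I understand, the concatenation happens along the channel dimension, so it grows the number of channels rather than the spatial size of the image. A quick shape check (the shapes are made up for illustration):

```python
import torch

skip = torch.randn(1, 64, 56, 56)      # features saved on the downsampling path
up = torch.randn(1, 128, 56, 56)       # upsampled features at the same grid size
merged = torch.cat([up, skip], dim=1)  # concatenation is along the channel axis
print(merged.shape)                    # torch.Size([1, 192, 56, 56]); channels add up
```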

For what kinds of problems is a U-Net NOT a good idea?


Why were the batchnorm layers’ params set to trainable?
Pardon me if it’s already been answered :)
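Not sure if this was covered later, but I believe fastai leaves batchnorm layers trainable even when the rest of the body is frozen (the `train_bn=True` default), since batchnorm statistics tuned to the pretraining data can be a poor fit for the new data. A plain-PyTorch sketch of the idea, not fastai's actual code:

```python
import torch.nn as nn

def freeze_except_bn(model: nn.Module):
    # Freeze every parameter, but leave batchnorm parameters trainable
    for module in model.modules():
        is_bn = isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
        for p in module.parameters(recurse=False):
            p.requires_grad = is_bn
```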

In particular, I’d be interested to know if anyone knows how good U-Net is for object detection.


What is PixelShuffle_ICNR in U-Net?

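As I understand it, `PixelShuffle_ICNR` is fastai's upsampling block: a conv whose weights get the ICNR initialisation, followed by `nn.PixelShuffle` (some versions add a blur as well), which upsamples while avoiding checkerboard artifacts. A simplified sketch of the idea, not the library code:

```python
import torch
import torch.nn as nn

def icnr_(weight, scale=2):
    # ICNR init: make every group of scale**2 output filters identical, so the
    # subpixel conv starts out like nearest-neighbour upsampling (no checkerboard)
    out_ch, in_ch, kh, kw = weight.shape
    sub = torch.empty(out_ch // scale**2, in_ch, kh, kw)
    nn.init.kaiming_normal_(sub)
    weight.data.copy_(sub.repeat_interleave(scale**2, dim=0))

class PixelShuffleUpsample(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * scale**2, kernel_size=1)
        icnr_(self.conv.weight, scale)
        self.shuffle = nn.PixelShuffle(scale)  # (b, c*s*s, h, w) -> (b, c, h*s, w*s)

    def forward(self, x):
        return self.shuffle(self.conv(x))
```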

What purpose does crappification serve?

It creates damaged data for the purpose of learning how to fix it.

We want to create a model that transforms low-resolution, possibly obstructed images into high-res versions. We need to create our own training set for this (since there isn’t an existing well-known one for this task).

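If memory serves, the lesson's `crappify` function does roughly this: shrink the image, stamp a random number on it, and re-save at a low JPEG quality. The details below are approximate:

```python
import random
from PIL import Image, ImageDraw

def crappify(src_path, dest_path, size=96):
    # Turn a clean image into a low-res, watermarked, JPEG-artifacted version,
    # giving (crappy, clean) pairs to train the restoration model on
    img = Image.open(src_path).convert('RGB')
    img = img.resize((size, size), resample=Image.BILINEAR)       # lose resolution
    draw = ImageDraw.Draw(img)
    draw.text((random.randint(0, size // 2), random.randint(0, size // 2)),
              str(random.randint(10, 99)), fill=(255, 255, 255))  # random number overlay
    img.save(dest_path, 'JPEG', quality=random.randint(10, 70))   # JPEG artifacts
```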

Could we use this for de-interlacing video, i.e. temporally different half-frames?

Would we have to merge both halves of the interlaced pair into one image for the “bad” input?


I searched this thread before asking this.

I couldn’t understand what the skip connection or identity connection does while upsampling. Does it “only add pixel channels” taken from the layer with the same grid size (on the downsampling path)?

What is BigGAN?

Are there any compression algorithms using deep learning? It seems like deep learning could well outperform JPEG.


In fit_one_cycle we use pct_start=0.8.

Does it mean that in every cycle it would start at 0.8 of the peak LR?
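For what it's worth, I believe pct_start is the fraction of the cycle spent *increasing* the LR towards the peak, not a fraction of the peak value, so pct_start=0.8 means the LR climbs for the first 80% of iterations and anneals for the last 20%. A sketch of the curve (the shape and defaults are my approximation of fastai's schedule):

```python
import math

def one_cycle_lr(pct, max_lr=1e-3, pct_start=0.8, div=25.0, final_div=1e4):
    # pct is training progress in [0, 1]; with pct_start=0.8 the LR climbs
    # for the first 80% of iterations, then anneals for the remaining 20%
    if pct < pct_start:                       # warmup phase
        p, lo, hi = pct / pct_start, max_lr / div, max_lr
    else:                                     # annealing phase
        p = (pct - pct_start) / (1 - pct_start)
        lo, hi = max_lr, max_lr / final_div
    return lo + (hi - lo) * (1 - math.cos(math.pi * p)) / 2   # cosine interpolation
```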

Why do we always train the model first with the layers frozen, and then unfreeze and train it again? Why not just unfreeze and train it directly?
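Not a full answer, but the usual rationale is that the new head starts with random weights while the pretrained body is already good, so training with the body frozen first lets the head catch up before the whole network is fine-tuned with smaller learning rates. The recipe looks like this in fastai (the learner setup here is hypothetical):

```python
from fastai.vision import *   # fastai v1-style import, as used in the course

# learn = cnn_learner(data, models.resnet34, metrics=accuracy)  # hypothetical setup
learn.freeze()                # body frozen: only the randomly-initialised head trains
learn.fit_one_cycle(4)
learn.unfreeze()              # now fine-tune everything...
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))  # ...with smaller LRs for early layers
```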