Then, what is the correct way to normalize the test dataset? Is it by using the stats computed on a single batch of the training dataset? Moreover, if the batch size is small due to the large size of the images, is it okay if the computed stats are not representative of the whole training set?
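For what it's worth, the usual convention is to normalize validation/test data with the statistics computed from the training data, never from the test data itself. A minimal PyTorch sketch (the tensors here are made-up placeholders):

```python
import torch

# Per-channel stats from (a batch of) the TRAINING set
train_batch = torch.rand(32, 3, 64, 64)   # N, C, H, W
mean = train_batch.mean(dim=(0, 2, 3))    # one mean per channel
std = train_batch.std(dim=(0, 2, 3))      # one std per channel

# The SAME stats are reused for the test set -- not recomputed on it
test_batch = torch.rand(16, 3, 64, 64)
test_norm = (test_batch - mean[None, :, None, None]) / std[None, :, None, None]
print(test_norm.shape)  # torch.Size([16, 3, 64, 64])
```

If a single batch is too small to be representative, you can average the stats over several training batches, or (when using a model pretrained on ImageNet) just normalize with the published ImageNet stats.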
From the fastai docs: https://docs.fast.ai/layers.html#PixelShuffle_ICNR, the way I understand it, this is a PyTorch module for upsampling by 2 times (scale) from a sequence of 2D convolutions, using PixelShuffle, ICNR initialization, and optional weight normalization.
- PixelShuffle: a PyTorch vision layer that is useful for implementing efficient sub-pixel convolution. What is PixelShuffle? If you are interested, look at the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (I remember that I learned about this in part 2 v2.)
- icnr initializes a weight matrix with ICNR, i.e. the sub-pixel convolution is initialized so that it starts out equivalent to a convolution followed by nearest-neighbor (NN) resize.
That’s right. We’ll be learning about it in part 2. It’s like a deconvolutional layer, or interpolation followed by convolution, but works a bit better.
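To make that concrete, here is a minimal sketch of sub-pixel upsampling with PixelShuffle, plus a simplified ICNR-style init (the actual fastai implementation differs in its details; channel counts here are arbitrary):

```python
import torch
import torch.nn as nn

scale = 2
ni, nf = 64, 64  # input / output channels (arbitrary for this sketch)

# The conv produces nf * scale^2 channels; PixelShuffle rearranges them
# into a spatial upsampling by `scale`.
conv = nn.Conv2d(ni, nf * scale**2, kernel_size=3, padding=1)
shuf = nn.PixelShuffle(scale)

# Simplified ICNR: give every group of scale^2 consecutive output filters
# the same kernel, so each upsampled 2x2 block starts out with identical
# values -- i.e. the layer initially behaves like nearest-neighbor resize.
with torch.no_grad():
    k = torch.randn(nf, ni, 3, 3) * 0.02
    conv.weight.copy_(k.repeat_interleave(scale**2, dim=0))

x = torch.randn(1, ni, 16, 16)
y = shuf(conv(x))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

So a 16x16 feature map comes out as 32x32 without the checkerboard artifacts a naive deconvolution tends to produce.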
When do we use an ImageItemList as opposed to using an ImageDataBunch?
I have the same query too
ImageItemList is part of the data blocks API. It’s more flexible. ImageDataBunch.create is a convenient wrapper for common use cases.
Layer (type) Output Shape Param # Trainable
Conv2d [128, 8, 14, 14] 80 True
What is the meaning of the first number of the shape (128)? Thanks.
That’s your batch size.
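You can check this with a standalone layer. Assuming the input is a batch of 128 single-channel 28x28 images (which matches both the 80 parameters and the 14x14 output in the summary above):

```python
import torch
import torch.nn as nn

# 8 filters of size 1x3x3 plus 8 biases: 8*1*3*3 + 8 = 80 parameters
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3,
                 stride=2, padding=1)

x = torch.randn(128, 1, 28, 28)   # batch of 128 images
out = conv(x)
print(out.shape)                  # torch.Size([128, 8, 14, 14])
print(sum(p.numel() for p in conv.parameters()))  # 80
```

The output shape is always [batch_size, channels, height, width], so the leading 128 changes if you change the batch size, while the other three are fixed by the layer and the input resolution.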
I’ve updated the fastai version, yet still got this error, any idea how to fix it? thanks
NameError: name 'res_block' is not defined
I searched the code; the function seems to be defined in layers.py:
def res_block(nf, dense:bool=False, norm_type:Optional[NormType]=NormType.Batch, bottle:bool=False, **kwargs):
I added the following import, but it does not work. Did I miss something?
from fastai.layers import *
It needs the latest dev version of fastai.
In GCP, when I do an update (every time I start it up):
sudo /opt/anaconda3/bin/conda install -c fastai fastai
=> it only updates to version 1.0.34.
If I download the latest dev version from git and run
sudo /opt/anaconda3/bin/pip install -e .[dev]
=> it updates to the latest dev version,
and I can run the res_block function nicely.
I found the exact same thing; I’m not sure what’s wrong with the setup on GCP.
You are right, there is some problem. Many people are facing the same issue. It’s not getting above 1.0.34.
This worked for me on GCP.
res_block is not working for me. It says res_block is not defined. I am on GCP and have updated conda as well as fastai. Is anybody else facing the same issue?
In the lesson it was pointed out that when using the Adam optimizer for GAN training we should zero the momentum (the first value in betas). Why isn’t momentum good for GAN training?
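My understanding (not a quote from the lesson): in GAN training the loss landscape keeps shifting as the generator and discriminator alternate, so gradient directions flip often, and a running average of past gradients (momentum) keeps pushing the weights in stale directions. In code it just means passing a zero first beta to Adam; the model here is a stand-in:

```python
import torch

model = torch.nn.Linear(10, 1)  # placeholder for a generator or critic

# betas = (gradient-average decay i.e. momentum, squared-gradient decay);
# setting the first to 0 disables the running average of gradients.
opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.0, 0.99))
print(opt.param_groups[0]['betas'])  # (0.0, 0.99)
```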
I can’t seem to wrap my head around why data.c is 3 here, this implies that there are 3 classes right? Though I cannot seem to understand why there are 3 classes, isn’t there only 1 type of target, i.e. the high res image?
Hi. I am trying to run the lesson7-human-numbers.ipynb notebook but encounter this error in the “Same thing with a loop” section, at learn.fit_one_cycle(6, 1e-4):
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
1344 size = list(input.size())
1345 if reduce(mul, size[2:], size[0]) == 1:
-> 1346 raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
1347 return torch.batch_norm(
1348 input, weight, bias, running_mean, running_var,
ValueError: Expected more than 1 value per channel when training, got input size [1, 64]
I have tried the code with different versions of fastai: 1.0.38, 1.0.37, and the latest developer version, but without success.
I ran the cells in order, so I don’t think there is anything wrong on my side. I would appreciate it if someone could help me solve the problem. Thank you in advance.
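For anyone hitting this: the error comes from batch norm receiving a batch with a single item during training (the per-channel variance is undefined for one sample), which can happen when the dataset size leaves a remainder of 1 in the last batch. A minimal reproduction of the same ValueError:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(64)
bn.train()                   # in eval() mode this would not raise

try:
    bn(torch.randn(1, 64))   # batch of one -> no variance to compute
except ValueError as e:
    print(e)                 # Expected more than 1 value per channel ...
```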
Not quite. c is the number of output activations. So… how many channels are in our target?
That’s fixed in the new 1.0.39 release. Many thanks for reporting it.