Hello guys,
in the draft:
fastbook/07_sizing_and_tta.ipynb
it is mentioned that normalization becomes especially important when using pretrained models; hence, the stats that were used for training the model must be applied to our “new” data, too.
It is also mentioned that, when using cnn_learner, we do not have to handle normalization ourselves, because the fastai library automatically adds the proper transform (usually with stats coming from the ImageNet dataset).
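To make my question concrete, this is how I would check the cnn_learner case (a minimal sketch using the PETS data from the book; I have not verified the exact output):

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
# Note: no Normalize anywhere in the transforms here
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=lambda f: f[0].isupper(), item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)

# If the book is right, a Normalize transform (with ImageNet stats)
# should now appear in the batch transforms even though we never added one:
print(learn.dls.after_batch)
```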
This is where my confusion starts:
Is this only true for cnn_learner, or does it also apply to, say, the unet_learner?
From the explanation above I would assume that the same holds for the unet_learner.
However, in Lesson 3 Jeremy applied normalization explicitly to his unet_learner:
```python
camvid = DataBlock(blocks=(ImageBlock, MaskBlock(codes)),
                   get_items=get_image_files,
                   splitter=FileSplitter(path/'valid.txt'),
                   get_y=lambda o: path/'labels'/f'{o.stem}_P{o.suffix}',
                   batch_tfms=[*aug_transforms(size=(360,480)),
                               Normalize.from_stats(*imagenet_stats)])

dls = camvid.dataloaders(path/"images", bs=8, path=path)

learn = unet_learner(dls, resnet34, metrics=metrics)
```
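To check whether a second Normalize gets added, I would continue the snippet above like this (just a diagnostic sketch, building on the `dls` and `learn` from the lesson code):

```python
# The Normalize.from_stats we added explicitly should show up here:
print(dls.after_batch)

# And after building the learner: does fastai add a *second* Normalize,
# or does it leave the existing pipeline alone?
print(learn.dls.after_batch)
```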
I thought that when using pretrained models, fastai takes the stats from a batch.
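What I mean is roughly this (a sketch of my mental model, not verified against the fastai source; `cuda=False` is only there so it runs on CPU):

```python
from fastai.vision.all import *

# My understanding so far: Normalize() with no stats computes mean/std
# from one batch of *our* data during setup, while Normalize.from_stats
# pins the stats to the ones the pretrained model was trained with.
norm_from_batch = Normalize()                                        # stats taken from a batch at setup time
norm_imagenet   = Normalize.from_stats(*imagenet_stats, cuda=False)  # fixed ImageNet stats
print(norm_imagenet.mean.flatten(), norm_imagenet.std.flatten())
```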
Could someone please explain more clearly why Jeremy used normalization on a pretrained model? Is this not done automatically by fastai?
Best regards