AttributeError: 'NoneType' object has no attribute 'c'
When I run learn = ConvLearner.pretrained(arch, data, precompute=True) on the dog breeds dataset, I get this error.
I’ve followed the instructions in the notebook pretty much exactly, save for some differences in how I’ve named my variables. I’ve already tried restarting the kernel.
When we start processing the satellite data, we begin with sz=64 and get the data. Then, on the next line, the data is resized and stored in the tmp directory:
data = data.resize(int(sz*1.3), 'tmp')
My question is: why is this done? Is it not possible to set sz to 83 in the first place? For the other sz values that follow (128 and 256), no additional resizing is done.
I also noticed that this line changed where the models are saved: they are now in 'data/planet/tmp/83/models'. I found this out by looking at learn.models_path.
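For reference, here is the sequence as I understand it from the notebook (a sketch from memory, with get_data being the notebook's own helper), which also shows where the 83 comes from:

```python
sz = 64
data = get_data(sz)  # load the planet data at size 64

# Cache resized copies on disk so the smallest side is int(64 * 1.3) = 83px;
# this is also why the models end up under data/planet/tmp/83/models.
data = data.resize(int(sz * 1.3), 'tmp')
```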
UPDATE
OK, so this question is answered in the next lesson. Next time I have a question, I’ll put it on hold until I’ve watched the next video. Jeremy also answers it here: Planet Classification Challenge
My name is Sebastian and I’m new to this forum, so first I’d like to say hi to everybody. Hi!
I’m also new to fastai and DL.
I have one simple question. Can I point to the test folder AFTER training, i.e. right before calling
learn.TTA()?
I have trained my model and want to reuse it on different images (JPEGs).
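In case it helps frame the question, here is a minimal sketch of what I mean, assuming the fastai 0.7 API from the course and a test folder named 'test' (the folder name and the PATH, arch, and sz variables are assumptions on my part):

```python
from fastai.conv_learner import *

# Rebuild the data object, this time pointing at a folder of test images.
# PATH, arch, and sz are assumed to be the same values used during training.
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, test_name='test')

# Attach the new data to the already-trained learner, then run TTA on the
# test set (is_test=True makes TTA use the test folder, not the validation set).
learn.set_data(data)
log_preds, _ = learn.TTA(is_test=True)
```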
I heard Jeremy say we would be able to support images larger than 224. How does the model support image sizes other than 224 for pre-trained networks, given that they were trained on ImageNet data?
As part of data augmentation, instead of cropping the larger image, why not resize it first to a smaller dimension and then crop an area out of it?
When we get data as a CSV file that maps a filename to a category, can we write a script to convert it into an ImageFolder-like dataset, grouping all images of the same category into separate folders? Would it behave differently in that case?
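Something like this should work as a starting point; a minimal sketch, assuming a labels.csv with filename and category columns and a flat source folder (all paths and column names here are hypothetical):

```python
import csv
import shutil
from pathlib import Path

# Hypothetical layout: images live flat in src_dir, and we build
# dst_dir/<category>/<filename> so it can be read as an ImageFolder dataset.
src_dir = Path('data/train')
dst_dir = Path('data/train_by_class')

with open('data/labels.csv', newline='') as f:
    for row in csv.DictReader(f):
        class_dir = dst_dir / row['category']
        class_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy(src_dir / row['filename'], class_dir / row['filename'])
```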
What kind of data augmentation is best for rectangular images with a small height and a much larger width?
When we unfreeze the layers, does it mean we are training the model from scratch and not using any pretrained ImageNet weights?
Hello all, a small, silly doubt: when we are doing image augmentation, suppose we increase the contrast or brightness of an image; the pixel values will also change, so the results will be different than expected, right? My question is: how do we overcome this situation?
I am trying to export the model using learn.export(), but somehow the export function is not available on the learner. When I checked the API using doc(learn), I see the whole set of functions except export. Is there something wrong with my setup?
learn.export()
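If it is a version issue (just a guess on my part; export() only exists on Learner in newer fastai releases), you can check which version is actually installed:

```python
import fastai
print(fastai.__version__)  # compare against the version the course notebooks expect
```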
Typically, you normalize the inputs before running the model for training or testing. When we use transfer learning, this normalization is done with the statistics of the dataset the model was pre-trained on (usually ImageNet).
So, if you plot these ‘normalized’ images, they might look weird, with some color aberrations. In order to display them as they originally were, you need to denormalize them again.
Fastai takes care of normalization automatically, which is why you don’t see the ImageNet statistics being supplied by hand. But this is standard practice. Hope this helps!
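To illustrate the idea, here is a minimal sketch in plain NumPy using the well-known ImageNet channel statistics (the function names are mine, not fastai’s):

```python
import numpy as np

# Channel-wise ImageNet statistics (RGB), as used by most pretrained models.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """img: HxWx3 float array in [0, 1] -> standardized input for the model."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def denormalize(img):
    """Invert normalize() so the image can be plotted with natural colors."""
    return img * IMAGENET_STD + IMAGENET_MEAN
```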