Wiki: Lesson 2

AttributeError: 'NoneType' object has no attribute 'c'

When I run learn = ConvLearner.pretrained(arch, data, precompute=True) on the dog breeds dataset, I get this error.

I’ve followed the instructions in the notebook pretty much exactly, save for some differences in how I named variables. I’ve already tried restarting the kernel.

Any idea what I should try?

Edit: I also tried a git pull; no luck.

When we start processing the satellite data, we set sz=64 and get the data. Then on the next line the data is resized and cached in the tmp directory:

data = data.resize(int(sz*1.3), 'tmp')

My question is: why is this done? Would it not be possible to set sz to 83 in the first place? For the sz values that follow (128 and 256), no additional resizing is done.

I also noticed that this line changes where the models are saved: they now end up in 'data/planet/tmp/83/models'. I found this out by looking at learn.models_path.
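In case it helps later readers, a minimal sketch of what that call does (fastai 0.7 API, as in the planet notebook; the speed-up rationale is my assumption):

sz = 64
# int(sz * 1.3) == 83, which is where the '83' in the tmp path comes from.
# resize() writes 83px copies of every image into 'tmp', so the training
# transforms only have to downscale 83px -> 64px instead of reading the
# full-resolution originals on every epoch.
data = data.resize(int(sz * 1.3), 'tmp')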

UPDATE
OK, this question is answered in the next lesson, so next time I have a question I’ll put it on hold until I’ve watched the following video. Jeremy also answers it here: Planet Classification Challenge

Did you ever figure out how to get 7zip installed on Gradient? I have run into this same issue.

There may be a problem with the file path. You can see a mixture of ‘\’ and ‘/’ in the path shown in the error message.

I am finding I need to edit notebook code in a few places to work with Windows file paths.
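For example, a minimal sketch: building paths with pathlib instead of concatenating strings keeps the separators consistent on Windows (the 'data/dogbreeds' root below is hypothetical):

from pathlib import Path

PATH = Path('data/dogbreeds')
labels_csv = PATH / 'labels.csv'   # pathlib inserts the right separator per OS
print(labels_csv)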

Actually, perhaps it’s more likely a need to download the weights.

@jeremy How does the model handle variable input sizes, as mentioned in the video? Aren’t there fully connected layers that do the classification?
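For what it’s worth, a minimal sketch of the usual trick (plain PyTorch, not fastai’s exact code; the channel and class counts below are made up): there is still a fully connected head, but an adaptive pooling layer in front of it collapses any spatial size to a fixed one.

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)   # collapses any HxW feature grid to 1x1 per channel
fc = nn.Linear(512, 120)         # assumed: 512 channels, 120 classes

for size in (224, 299, 340):                              # different input resolutions
    feats = torch.randn(1, 512, size // 32, size // 32)   # fake conv feature map
    out = fc(pool(feats).flatten(1))
    print(size, out.shape)                                # torch.Size([1, 120]) every time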

Hello,

My name is Sebastian and I’m new to this forum, so first of all I’d like to say hi to everybody. Hi!
I’m also new to fastai and DL.
I have one simple question: can I point to the test folder AFTER training, i.e. before calling
learn.TTA()?
So I have trained my model and want to reuse it on different objects (jpegs).
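If anyone has the same question, a sketch of one way to do it, assuming the fastai 0.7 API from this lesson (from_paths takes a test_name argument, and set_data() points an already-trained learner at new data without touching its weights):

from fastai.conv_learner import *

tfms = tfms_from_model(arch, sz)   # arch/sz as set up earlier in the notebook
data = ImageClassifierData.from_paths(PATH, tfms=tfms, test_name='test')
learn.set_data(data)                     # the trained weights are kept
log_preds, _ = learn.TTA(is_test=True)   # predictions for the test folder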

I have some doubts regarding this chapter:

  1. I heard Jeremy say we would be able to support images larger than 224. How does a pre-trained network support images bigger than 224 if it was trained on ImageNet data?
  2. As part of data augmentation, instead of cropping the larger image, why not resize it to a smaller dimension first and then crop an area out of it?
  3. When we get the data as a CSV file that maps filenames to categories, can we write a script to convert it into an ImageFolder-like dataset, grouping all images of the same category into separate folders (I sketched this below, after the list)? Would anything be different in that case?
  4. What kind of data augmentation is best for rectangular images with a small height and a much larger width?
  5. When we unfreeze the layers, does that mean we are training the model from scratch, without any of the pre-computed ImageNet weights?
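For question 3, a hedged sketch of such a script (the file layout and the 'id'/'breed' column names are assumptions; adjust them to the actual labels.csv):

import csv, shutil
from pathlib import Path

src, dst = Path('train'), Path('train_by_class')
with open('labels.csv') as f:
    for row in csv.DictReader(f):                # assumed columns: id, breed
        class_dir = dst / row['breed']
        class_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy(src / (row['id'] + '.jpg'), class_dir)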

When using create_cnn, is precompute = False the same thing as pretrained = False?

Hello all, a small, possibly silly doubt: when we do image augmentation, let’s suppose we increase the contrast or brightness of an image; the pixel values will change too. In that case, won’t the results be different than expected? So my question is: how do we overcome this?
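In case a concrete example helps, a minimal sketch (using torchvision, not fastai’s exact transforms; 'some_image.jpg' is hypothetical): a new random brightness/contrast is drawn on every call, so across epochs the network sees many lighting variants of the same label and learns to be invariant to them, rather than memorising exact pixel values.

from PIL import Image
from torchvision import transforms

augment = transforms.ColorJitter(brightness=0.3, contrast=0.3)
img = Image.open('some_image.jpg')
variants = [augment(img) for _ in range(3)]   # three different random variants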

How should I go about processing images if I only have TIF images (50-200 MB each) available?
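One approach, sketched below under assumptions (plain PIL, hypothetical paths): cut each huge TIFF into fixed-size tiles and save them as JPEGs so a normal image pipeline can handle them. Note that Image.open is lazy but the full decode happens on first pixel access, so very large files may still need plenty of RAM; a windowed reader such as rasterio avoids that.

from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # lift the decompression-bomb guard for trusted files

def tile_tiff(path, out_dir, size=512):
    img = Image.open(path)
    w, h = img.size
    for top in range(0, h - size + 1, size):
        for left in range(0, w - size + 1, size):
            tile = img.crop((left, top, left + size, top + size))
            tile.convert('RGB').save(f'{out_dir}/tile_{top}_{left}.jpg')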

I am trying to export the model using learn.export(), but somehow the export function is not available on the learner. When I checked the API using doc(learn), I see the whole set of functions except export. Is there something wrong with my setup?

learn.export()

AttributeError                            Traceback (most recent call last)
in ()
----> 1 learn.export()

AttributeError: 'Learner' object has no attribute 'export'

Any news on the AttributeError: 'Learner' object has no attribute 'export'? I have the same issue, and I’ve updated everything per the instructions as well.

Hi,
what does this line of code do?

return data.trn_ds.denorm(x)[2]
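In case it helps, a hedged reading (assuming the fastai 0.7 notebooks, where x is a normalized minibatch from a data loader):

import matplotlib.pyplot as plt

imgs = data.trn_ds.denorm(x)   # undo the normalization (and reorder to HxWxC for plotting)
img = imgs[2]                  # [2] picks the third image in the batch
plt.imshow(img)                # typical use: display it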