Wiki: Lesson 2

I didn’t understand how we can train with different input sizes. In every tutorial, people resize their images into an AxA square, so all inputs share the same size and aspect ratio, and the network’s input size is always AxA. How can our model adapt itself to different input sizes?

import PIL.Image

# Print the sizes of the first five raw training images
for i in imgs[0:5]:
  img1 = PIL.Image.open('tmp/340/train/' + i)
  print(img1.size)

(424, 340)
(510, 340)
(453, 340)
(453, 340)
(489, 340)

Why don’t we train the model with precompute=False the very first time? What is the point of running it with True and then running it with False?

Edit: I found the answer.

" The only reason to have precompute=True is it is much faster (10 or more times). If you are working with quite a large dataset, it can save quite a bit of time. There is no accuracy reason ever to use precompute=True"

In this code snippet, you’re printing the sizes of the raw dataset images. We will resize the images into an AxA square to use them in the model.

We resize images in the following way.

  1. We have initialized a variable called ‘sz’ in the notebook, which is the size to which the images will be resized.

  2. The ‘ImageClassifierData’ class from the fastai library will be used to create the data object; it resizes the images to the specified size.

  3. In this example, the variable ‘data’ holds the dataset that will be used during training, where each image is of size 224x224.
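
As a minimal sketch of those steps, assuming PATH and arch are defined as in the fastai 0.7 lesson notebook:

sz = 224                          # target side length: every image becomes sz x sz
tfms = tfms_from_model(arch, sz)  # transforms that perform the resizing
data = ImageClassifierData.from_paths(PATH, tfms=tfms)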

Thanks for your answer. But:

  1. It is not raw data. We have resized the heights of the images to 340px in the get_data function.


    Why did we do that? Why did we resize only the heights?

  2. Why did we build a temp directory?

I ran into the same error. I then figured out that if you are on Paperspace, DON’T run this cell. Instead, use one of the other methods: I used the Kaggle CLI (search for it on the deep learning website), followed the instructions to set it up, accepted the Kaggle competition rules, and then used the CLI to download the necessary files to Paperspace.

Then proceed with the rest of the notebook. Just set up a /tmp folder under planet along with /models.

I believe that if you use Crestle you need to execute this cell/set of commands.


Really, as a beginner I enjoyed this a lot. I believe the naming originates from the idea that some images you capture from the side (like taking a photo of a cat or dog) versus some you take top-down, like satellite images, or food photos on Instagram. In the side-on case, reasonable data augmentations would flip horizontally, except in the occasional case of the sideways or upside-down hanging cat/dog. In top-down imaging like satellites, you can rotate and flip the image in every direction and it could still look like a plausible training image.

Thanks & Regards,
Katherine

AttributeError: 'NoneType' object has no attribute 'c'

When I run learn = ConvLearner.pretrained(arch, data, precompute=True) on the dog breeds dataset, I get this error.

I’ve followed the instructions in the notebook pretty much exactly, save for some differences in how I’ve named variables. I’ve already tried restarting the kernel.

Any idea on what I should try?

Edit: I also attempted a git pull; no luck.

When starting the processing of the satellite data, we start out with sz=64 and get the data. Then on the next line the data is resized and stored in the tmp directory:

data = data.resize(int(sz*1.3), 'tmp')

My question is: why is this done? Is it not possible to set sz to 83 in the first place? For the other sz values that follow (128 and 256), no additional resizing is done.

I also noticed that this line changed where the models are saved. They are now in 'data/planet/tmp/83/models'. I found out by looking at learn.models_path.

UPDATE
OK, so this question is answered in the next lesson. So next time I have a question, I’ll put it on hold until I’ve watched the next video. Jeremy also answers here: Planet Classification Challenge
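
For anyone landing here before watching the next lesson, here is a rough sketch of the size progression from the planet notebook (fastai 0.7), where get_data is the notebook’s own helper:

sz = 64
data = get_data(sz)
data = data.resize(int(sz * 1.3), 'tmp')  # cache ~83px copies once, up front
# ... train at sz=64 ...
learn.set_data(get_data(128))             # later sizes go back to the original images
# ... train at 128, then repeat with sz=256 ...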

Did you ever figure out how to get 7zip installed on Gradient? I have run into this same issue.

There may be a problem with the file path. You can see a mixture of '\' and '/' in the path shown in the error message.

I am finding I need to edit notebook code in a few places to work with Windows file paths.
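
If it helps, a small sketch of the pathlib approach, which sidesteps hand-written separators entirely (the folder names here are just examples):

from pathlib import Path

PATH = Path('data') / 'dogscats'  # pathlib picks the right separator per OS
print(PATH / 'train')             # data/dogscats/train on Linux, data\dogscats\train on Windows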

Actually, it is perhaps more likely a need to download the weights.

@jeremy How does the model handle variable input sizes as mentioned in the video? Aren’t there any fully connected layers to do the classification?
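
Not an official answer, but a generic PyTorch sketch of the usual mechanism: an adaptive pooling layer squeezes whatever spatial size the conv stack produces down to a fixed size, so the fully connected layer after it never sees a varying input (this is an illustration, not the exact fastai head):

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d(1)  # output is always 1x1 per channel
fc = nn.Linear(512, 10)         # so this layer's input size is fixed

for side in (7, 10, 12):        # feature-map sizes from different image sizes
    feats = torch.randn(1, 512, side, side)
    out = fc(pool(feats).flatten(1))
    print(side, tuple(out.shape))  # (1, 10) every time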

Hello,

My name is Sebastian and I’m new to this forum, so I would like to say hi to everybody first. Hi!
I’m also new to fastai and DL.
I have one simple question: can I point to a test folder AFTER training, i.e. before calling
learn.TTA()?
I have trained my model and want to reuse it on different objects (JPEGs).
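
In case it helps, one common way to do this with fastai 0.7 (a sketch; it assumes PATH, arch, sz and a trained learn as in the lesson, plus a folder named 'test' under PATH):

tfms = tfms_from_model(arch, sz)
data_with_test = ImageClassifierData.from_paths(PATH, tfms=tfms, test_name='test')
learn.set_data(data_with_test)           # attach the dataset that includes the test set
log_preds, _ = learn.TTA(is_test=True)   # run test-time augmentation on the test folder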

I have some doubts regarding this chapter:

  1. I heard Jeremy saying we would be able to support images larger than 224. How does the model support images bigger than 224 for pre-trained networks, if they were trained on ImageNet data?
  2. As part of data augmentation, instead of cropping the larger image, why not resize it first to a smaller dimension and then crop an area out of it?
  3. When we get data as a CSV file which maps a filename to a category… can we write a script to convert it into an ImageFolder-like dataset and group all images of the same category into separate folders? Will it be different in that case?
  4. What kind of data augmentation is best for rectangular images with a small height and much larger width?
  5. When we are unfreezing the layers, does it mean we are training the model from scratch and not using any pre-computed ImageNet weights? (See the sketch after this list.)
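
On point 5: unfreezing does not restart training from scratch; the earlier layers keep their ImageNet weights and merely become trainable. A sketch of the lesson’s pattern (fastai 0.7; np is NumPy):

learn.unfreeze()                    # earlier layers become trainable, but they
                                    # still start from the pretrained weights
lrs = np.array([1e-4, 1e-3, 1e-2])  # smaller learning rates for earlier layers
learn.fit(lrs, 3, cycle_len=1, cycle_mult=2)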

When using create_cnn, is precompute=False the same thing as pretrained=False?
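
They are different things: precompute (fastai 0.7) merely cached the frozen layers’ activations as a speed-up, while pretrained=False in create_cnn (fastai 1.0) starts from random weights instead of ImageNet ones. A sketch of the 1.0 side, assuming a DataBunch named data and the usual fastai.vision imports:

learn = create_cnn(data, models.resnet34, pretrained=True)  # keep ImageNet weights
learn.freeze()  # train only the head, the stage precompute used to speed up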

Hello all, a small silly doubt: when we are doing image augmentation, suppose we increase the contrast or brightness of an image; the pixel values will change too, so won’t the results be different than expected? My question is: how do we overcome this?
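
The changed pixels are largely the point: the magnitudes are kept small and random, so the label stays valid while the network learns to ignore lighting changes. A sketch using fastai 0.7’s lighting transform (PATH, arch and sz assumed as in the lesson):

aug_tfms = [RandomLighting(b=0.05, c=0.05)]  # small random brightness/contrast jitter
tfms = tfms_from_model(arch, sz, aug_tfms=aug_tfms)
data = ImageClassifierData.from_paths(PATH, tfms=tfms)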

How should I go about processing images if I only have TIFF images (50-200 MB each) available?
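
One common approach is to downscale the TIFFs to modest JPEGs once, before training, so the dataloader never touches the huge originals. A sketch using Pillow (the folder name is hypothetical):

from pathlib import Path
from PIL import Image

for tif in Path('data/tifs').glob('*.tif'):  # hypothetical source folder
    img = Image.open(tif)
    img.thumbnail((1024, 1024))              # downscale in place, keeping aspect ratio
    img.convert('RGB').save(tif.with_suffix('.jpg'), quality=90)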

I am trying to export the model using learn.export(), but somehow the export function is not available on the learner. When I checked the API using doc(learn), I see the whole set of functions except export. Is there something wrong with my setup?

learn.export()

AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 learn.export()

AttributeError: 'Learner' object has no attribute 'export'

Any news on the AttributeError: 'Learner' object has no attribute 'export'? I have the same issue. I updated everything per the instructions as well.
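
Learner.export() is not part of the old fastai 0.7 library and only appeared partway through the 1.0.x releases, so checking the installed version is a reasonable first step (a sketch):

import fastai
print(fastai.__version__)  # learn.export() needs a recent fastai 1.0.x release
# If it is old, upgrading (e.g. pip install -U fastai) should make export available.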

Hi,
what does this line of code do?

return data.trn_ds.denorm(x)[2]
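
A sketch of what that does, unpacked step by step (fastai 0.7; assumes data and matplotlib as in the lesson):

import matplotlib.pyplot as plt

x, y = next(iter(data.trn_dl))  # one minibatch of normalized training tensors
imgs = data.trn_ds.denorm(x)    # undo the normalization -> displayable image arrays
third = imgs[2]                 # the [2] picks the third image in the batch
plt.imshow(third)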