I didn’t understand how we can train with different input sizes. In every tutorial, people resize their images to an AxA square with the same size and the same aspect ratio, so the input size of the network is always AxA. How can our model adapt itself to different input sizes?
import PIL.Image

for i in imgs[0:5]:
    img1 = PIL.Image.open('tmp/340/train/' + i)
    print(img1.size)
Why don’t we train the model with precompute=False the very first time? What is the point of running it with True and then running it with False?
Edit: I found the answer.
" The only reason to have precompute=True is it is much faster (10 or more times). If you are working with quite a large dataset, it can save quite a bit of time. There is no accuracy reason ever to use precompute=True"
In this code snippet, you’re printing the sizes of the raw dataset images. We will resize the images to an AxA square before using them in the model.
We resize images in the following way.
We have initialized a variable called ‘sz’ in the notebook, which is the size that the images will be resized to.
The ‘ImageClassifierData’ class from the fastai library is used to create the data object with the appropriate size; it resizes the images to the specified size.
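As a rough illustration of what that resizing amounts to, here is a sketch using PIL directly rather than the fastai internals (the `sz` value and the in-memory image are made up for the example; the notebook opens real files instead):

```python
from PIL import Image

sz = 224  # hypothetical target size, standing in for the notebook's `sz`

# Simulate one raw, rectangular image like those printed above.
raw = Image.new('RGB', (500, 375))

# The data loader ultimately produces square sz x sz inputs,
# conceptually equivalent to this resize.
squared = raw.resize((sz, sz))
print(squared.size)  # (224, 224)
```

Every image, whatever its original shape, ends up at the same square size, which is why the network sees a fixed input.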
I ran into the same error. Then I figured out that if you are on Paperspace, DON’T run this cell. Instead, use one of the other methods: I used the Kaggle CLI (search for “Kaggle CLI” on the deep learning website), followed the instructions to set it up, accepted the Kaggle competition rules, and then used the CLI to download the necessary files to Paperspace.
Then proceed with the rest of the notebook. Just set up a /tmp folder under planet along with /models.
I believe that if you use Crestle, you do need to execute this cell/set of commands.
Really, as a beginner I enjoyed it a lot. I believe the naming comes from the idea that some pictures you capture from the side (like taking a photo of a cat or dog) versus some you take top-down, like satellite images, or food photos on Instagram. In the side-on case, reasonable data augmentations would flip horizontally, except in the occasional case of the sideways or upside-down hanging cat/dog. In top-down imaging like satellites, you can rotate and flip the image in every direction and it would still look like a plausible training image.
AttributeError: 'NoneType' object has no attribute 'c'
When I run learn = ConvLearner.pretrained(arch, data, precompute=True) on the dog breeds dataset, I get this error.
I’ve followed the instructions in the notebook pretty much exactly, save for some differences in how I’ve named my variables. I’ve already tried restarting the kernel.
When starting the processing of the satellite data, we start out with sz=64 and get the data. Then on the next line the data is resized and stored in the tmp directory:
data = data.resize(int(sz*1.3), 'tmp')
My question is: why is this done? Is it not possible to set sz to 83 in the first place? For the other sz values that follow (128 and 256), no additional resizing is done.
I also noticed that this line changed where the models are saved. They are now in 'data/planet/tmp/83/models'. I found out by looking at learn.models_path.
UPDATE
OK, so this question is answered in the next lesson. Next time I have a question, I’ll put it on hold until I’ve watched the next video. Jeremy also answers it here: Planet Classification Challenge
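For the arithmetic itself, a tiny sketch (my understanding, to be taken with a grain of salt: the 1.3x factor pre-shrinks the large originals for speed while leaving some headroom so random crop/zoom augmentations at size sz still have extra pixels to draw from):

```python
sz = 64

# data.resize(int(sz * 1.3), 'tmp') writes copies at this size on disk,
# which is why the models end up under .../tmp/83/models.
resized_to = int(sz * 1.3)
print(resized_to)  # 83
```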
My name is Sebastian and I’m new to this forum, so I would like to say hi to everybody first. Hi!
I’m also new to fastai and DL.
I have one simple question. Can I point to the test folder AFTER training, i.e. before calling
learn.TTA()?
I have trained my model and want to reuse it on different objects (jpegs).
I heard Jeremy saying we would be able to support images larger than 224. How does a pretrained network support images bigger than 224 if it was trained on ImageNet data at that size?
As a part of data augmentation, instead of cropping the larger image, why not resize it first to a smaller dimension and then crop an area out of it?
When we get the data as a CSV file that maps filenames to categories, can we write a script to convert it into an ImageFolder-like dataset, grouping all images of the same category into separate folders? Would anything be different in that case?
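Yes, a small script can do that. A minimal sketch (the function name and CSV layout here are hypothetical; also note that fastai can read the CSV directly via `ImageClassifierData.from_csv`, so this reorganisation is optional):

```python
import csv
import os
import shutil

def csv_to_image_folders(csv_path, src_dir, dst_dir):
    """Copy each image into dst_dir/<category>/ based on a
    'filename,category' CSV (no header row assumed)."""
    with open(csv_path, newline='') as f:
        for fname, category in csv.reader(f):
            target = os.path.join(dst_dir, category)
            os.makedirs(target, exist_ok=True)
            shutil.copy(os.path.join(src_dir, fname),
                        os.path.join(target, fname))
```

Training on the resulting folder structure should give the same results as reading the CSV directly, since the label information is identical either way.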
What kind of data augmentation is best for rectangular images with a small height and a much larger width?
When we unfreeze the layers, does that mean we are training the model from scratch and not using any pretrained ImageNet weights?
Hello all, a small question: when we do image augmentation, suppose we change the contrast or brightness of an image. The pixel values will also change, so in that case won’t the results be different than expected? My question is how we overcome this situation.
I am trying to export the model using learn.export(), but somehow the export function is not available on the learner. When I checked the API using doc(learn), I see the whole set of functions except export. Is there something wrong with my setup?
learn.export()