Wiki: Lesson 1


(Andrea) #137

@ecdrid
Yes, of course, but at the end of the prediction I want to know immediately which images were correctly or incorrectly classified, as I'm not sending these results to a Kaggle competition (I'm not using a Kaggle dataset).


(ecdrid) #138

In such a case, do we then need to write something that maps the files to their actual classes in a csv file, for us to refer to after the predictions are made?

Or let's play smart: move all your test images to the validation set, move those currently in validation to train, and update the test folder so it has one image from each class?

Or we can use the model to predict on single images…?

It isn't really correct, though…


(Andrea) #139

I think this could be a solution but not the ideal one.

I don't think there is a requirement for a minimum number of images in the test set, so I could test with one image per class, but that would not be a solution either.

I have been checking the ImageClassifierData class and I don't see that it is possible to do what I need, which is quite odd because this is possible in Keras.


(ecdrid) #140

It's possible here too, but we need to hard-code such stuff (not sure whether it's already there).
It would be something like checking all the files in a particular directory of a particular class and comparing them with the class our model predicted those images to be in… (using os, glob, etc.).
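
Just as a rough sketch of what I mean (test_dir and the predicted dict are placeholders here, not anything that already exists in the library):

import os
from glob import glob

test_dir = 'data/test'   # hypothetical layout: one sub-folder per true class
predicted = {}           # hypothetical: file name -> class the model predicted, filled in from your predictions

# map each file to the class implied by the folder it sits in
actual = {}
for class_dir in glob(os.path.join(test_dir, '*')):
    cls = os.path.basename(class_dir)
    for fpath in glob(os.path.join(class_dir, '*')):
        actual[os.path.basename(fpath)] = cls

# compare folder class with predicted class, file by file
for fname, cls in actual.items():
    ok = predicted.get(fname) == cls
    print(fname, cls, predicted.get(fname), 'correct' if ok else 'wrong')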


(Andrea) #141

Yes, I think it is necessary to hard-code it, unless it is possible to do this with arrays or a csv file…

Thanks for replying.


(Sandip) #142

Following this discussion with interest, because I too want to put funky images in the test folder and see how the model performs, and I realized that the current workflow does not probe the test dataset.

Found these threads discussing something similar:

Looks like these solve part of the problem:

Here’s my status:
I am able to pass in the test data folder as

and direct the prediction to the test folder by


236 is the number of images in my test folder. What is the ‘2’?
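
For reference, those two steps would look roughly like this; resnet34, a size of 224, a folder literally named 'test', and the usual Lesson 1 names (PATH) are assumptions about the setup rather than the exact code:

from fastai.conv_learner import *

arch = resnet34
sz = 224
tfms = tfms_from_model(arch, sz)
# pass the test folder in when building the data object
data = ImageClassifierData.from_paths(PATH, tfms=tfms, test_name='test')
learn = ConvLearner.pretrained(arch, data, precompute=True)

# point the prediction at the test folder
log_preds = learn.predict(is_test=True)
print(log_preds.shape)   # (number of test images, number of classes)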

I am able to look at the predictions by doing:

After this, how do I make it show the image and the predicted class?


(ecdrid) #143

This might make it a bit easier if we want to do it ourselves…
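
Roughly the kind of thing it does, as a sketch (it assumes the usual Lesson 1 learn and data objects; the column names and output file name are just placeholders):

import os
import numpy as np
import pandas as pd

log_preds = learn.predict(is_test=True)     # log-probabilities, one row per test image
preds = np.argmax(log_preds, axis=1)        # index of the most likely class per image
labels = [data.classes[p] for p in preds]   # turn indices into class names

# map each test file name to the label the model assigned to it
df = pd.DataFrame({'file': [os.path.basename(f) for f in data.test_ds.fnames],
                   'label': labels})
df.to_csv('predictions.csv', index=False)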


(Andrea) #144

Thanks! This helped me a lot. I used your code as if I were sending a submission, and at least it maps the name of the file to the label assigned. With this I can now check whether testing is doing fine; I just have to code a few more things to make it more ‘automatic’.


(ecdrid) #145

You can share your work here… (or send a PR)
Thanks …

Also it isn’t my code…


(Andrea) #146

Oops, sorry, I know it is not your code, I just typed fast…

What I did was basically the same as @SlowLlama did, with the obvious changes to fit my dataset's characteristics, and in advance I apologize if my code is very basic; I just started with Python.


(ecdrid) #147

Nice Code…


#148

I'm currently working on Lesson 1, trying out the model on my own dataset (35 images in the training set and 26 in validation, of stuffed animals and toy cars). I've tried lowering the batch size as mentioned by other members here, but I still can't get the learning rate vs. iterations plot to show up.

I believe the batch size is set by modifying learn.data.bs (if that’s wrong please correct me).

In this post, it is mentioned that the number of iterations is equal to the training dataset size divided by the batch size, which in my case with a batch size of 1 should produce 35 points… and with the first 10 and last 5 cut out, there should still be 20 points left to plot.

Any help is appreciated.
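
For reference, here is roughly how I set things up; PATH, arch and sz are the usual Lesson 1 names, and putting bs directly into from_paths (rather than changing learn.data.bs) is my guess at where it actually needs to go:

from fastai.conv_learner import *

arch = resnet34
sz = 224
data = ImageClassifierData.from_paths(PATH, bs=1, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)

learn.lr_find()
learn.sched.plot()   # by default this drops the first 10 and last 5 points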


#149

Just having some fun with Santa vs. Jack Skellington… I’m not sure how so many kids were confused in the movie. My CNN seems to make everything clear… :wink:


Here’s a link to the dataset as per people’s request. It’s really light, so feel free to add: http://bit.ly/2o4Sgjh


#150

I have been trying to run lesson1.ipynb, but I am facing difficulties with importing the libraries.

https://i.imgur.com/wpMJDd4.png

I know that this question has been answered with ‘use python 3.6’, but I am already running this in a conda environment with Python 3.6.

Any thoughts on what’s going wrong here?


(Oren Dar) #151

Both the error description and the stacktrace seem to indicate Python 3.5 - have you tried updating your Python version and re-running?


(Phong) #152

Are you still keeping the dataset somewhere else? Can you please share your dataset with everyone?


#153

Yes, I updated Python to 3.6.4.

I checked the version of Python in my conda environment with

python --version

Also, I ran the following piece of code in the Python command-line interpreter within my conda environment:

name = "fast.ai"
print(f"{name}")

The output was fast.ai, which is only possible with Python 3.6.

The problem seems unique to my Jupyter notebook; I am using the Python 3 kernel to run it. I even tried the following within the notebook:

!python --version

and the output was Python 3.6.4, so I am not sure what the problem is.
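
One more check I can run inside the notebook itself, to see exactly which interpreter the kernel is using (sys.executable simply reports the running Python binary):

import sys
print(sys.executable)   # path of the Python binary the kernel is running
print(sys.version)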


#154

Hi, don’t worry. The problem appears to have solved itself somehow. :smiley:


(Florian Peter) #155

Same problem here, on a larger dataset.

Running out of 32GB RAM and crashing, even with num_workers=1:


(why) #156

Reduce the batch size to 16, or try decreasing the image size.
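
Something along these lines, assuming the usual Lesson 1 setup (PATH and arch are the notebook's names; the exact values are up to you):

sz = 128   # smaller images
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, bs=16, tfms=tfms, num_workers=1)
learn = ConvLearner.pretrained(arch, data, precompute=True)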