Wiki: Lesson 1

Yes, of course, but at the end of the prediction I want to know immediately which images were correctly classified and which weren't, since I'm not sending these results to a Kaggle competition (I'm not using a Kaggle dataset).

In that case, do we need to write something that maps the files to their actual classes in a CSV file, so we can refer to it after the predictions are made?
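If the test images happen to be sorted into class subfolders (a hypothetical layout — adjust the paths to whatever your setup actually looks like), that mapping can be generated from the directory structure itself rather than written by hand. A minimal sketch in plain Python:

```python
import csv
import os

def write_true_labels(test_dir, out_csv):
    """Walk class subfolders under test_dir and record each file's true class.

    Assumes a layout like test_dir/<class_name>/<image> -- a hypothetical
    setup, not anything the library requires.
    """
    rows = []
    for cls in sorted(os.listdir(test_dir)):
        cls_dir = os.path.join(test_dir, cls)
        if not os.path.isdir(cls_dir):
            continue
        for fname in sorted(os.listdir(cls_dir)):
            # store the path relative to test_dir plus the folder-derived class
            rows.append((os.path.join(cls, fname), cls))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "true_class"])
        writer.writerows(rows)
    return rows
```

The resulting CSV can then be joined against the prediction output by filename.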

Or, playing it smart: move all your test images to the validation set, move the existing validation images to the training set, and leave the test folder with just a single image from each class?

Or we can use the model to predict on single images…?

That isn’t correct, though…

I think this could be a solution but not the ideal one.

I don’t think there is a minimum number of images required in the test set, so I could test with one image per class, but that wouldn’t be a solution either.

I have been checking the ImageClassifierData class and I don’t see that it’s possible to do what I need, which is quite odd, because this is possible in Keras.

It’s possible here too, but we’d need to hard-code it ourselves (I’m not sure whether it’s already there).
It would mean checking all the files in a particular class’s directory and comparing them with the class our model predicted those images to be in… (using os, glob, etc.)
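The bookkeeping half of that idea can be sketched with os and glob alone; how you fill in the `predictions` dict depends entirely on your model, so treat its contents here as placeholders:

```python
import glob
import os

def accuracy_by_class(test_dir, predictions):
    """Compare predicted labels against the class implied by each file's folder.

    `predictions` maps a relative path like 'cats/img1.jpg' to a predicted
    class name (produced however your model produces it -- this function is
    just the per-class tally).  Returns {class: (n_correct, n_total)}.
    """
    results = {}
    for path in glob.glob(os.path.join(test_dir, "*", "*")):
        rel = os.path.relpath(path, test_dir)
        true_cls = rel.split(os.sep)[0]          # folder name = true class
        pred_cls = predictions.get(rel.replace(os.sep, "/"))
        correct, total = results.get(true_cls, (0, 0))
        results[true_cls] = (correct + (pred_cls == true_cls), total + 1)
    return results
```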

Yes, I think it is required to hard-code it, unless it’s possible to do this with arrays or a CSV file…

Thanks for replying.

Following this discussion with interest, because I too want to put funky images in the test folder and see how the model performs, and I realized that the current workflow does not probe the test dataset.

Found these threads discussing similar:

Looks like these solve part of the problem:

Here’s my status:
I am able to pass in the test data folder as

and direct the prediction to the test folder by

236 is the number of images in my test folder. What is the ‘2’?
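If it helps: the ‘2’ is almost certainly the number of classes — the prediction call returns one row per image with one log-probability per class. A quick standalone numpy sketch (fake numbers, standing in for the real model output):

```python
import numpy as np

# Stand-in for the (n_images, n_classes) array of log-probabilities the
# prediction step returns -- here 4 fake "images" and 2 classes.
log_preds = np.log(np.array([[0.9, 0.1],
                             [0.2, 0.8],
                             [0.6, 0.4],
                             [0.3, 0.7]]))

print(log_preds.shape)                    # (4, 2): images x classes
pred_idx = np.argmax(log_preds, axis=1)   # index of best class per image
print(pred_idx)                           # [0 1 0 1]
```

So for 236 test images and a two-class problem you'd see shape (236, 2).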

I am able to look at the predictions, by doing:

After this, how do I make it show the image and the predicted class?
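One way to get there (a sketch — `fnames` and class-name lists would come from the data object, e.g. something like `data.test_ds.fnames` and `data.classes` in this library, so double-check the attribute names against your version):

```python
import numpy as np

def label_predictions(log_preds, fnames, classes):
    """Pair each test filename with its predicted class name.

    log_preds: the (n_images, n_classes) array from the prediction step.
    fnames / classes: filenames and class names from the data object
    (attribute names vary by library version -- check yours).
    """
    pred_idx = np.argmax(log_preds, axis=1)
    return [(f, classes[i]) for f, i in zip(fnames, pred_idx)]

# To actually display an image alongside its label, something like this
# (untested sketch; PATH is whatever your data root is):
#   import os
#   import matplotlib.pyplot as plt
#   from PIL import Image
#   f, cls = pairs[0]
#   plt.imshow(Image.open(os.path.join(PATH, 'test', f)))
#   plt.title(cls)
```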

This might make it a bit easier if we want to do it ourselves…

Thanks! This helped me a lot. I used your code as if I were sending a submission, and at least it maps the name of each file to the label assigned. With this I can now check whether testing is doing fine; I just have to code a few more things to make it more ‘automatic’.

You can share your work here… (or send a PR)
Thanks…

Also it isn’t my code…

Oops, sorry — I know it is not your code, I just typed too fast…

What I did was basically the same as what @SlowLlama did, with the obvious changes to fit my dataset’s characteristics. In advance, I apologize if my code is very basic; I just started with Python.


Nice Code…

Just having some fun with Santa vs. Jack Skellington… I’m not sure how so many kids were confused in the movie. My CNN seems to make everything clear… :wink:


Here’s a link to the dataset as per people’s request. It’s really light, so feel free to add:


I have been trying to run lesson1.ipynb, but I am facing difficulties with importing the libraries.

I know that this question has been answered with ‘use python 3.6’, but I am running this in a conda environment with python 3.6 already.

Any thoughts on what’s going wrong here?

Both the error description and the stacktrace seem to indicate Python 3.5 - have you tried updating your Python version and re-running?

Are you still keeping the dataset somewhere else? Can you please share it with everyone?

Yes, I updated Python to 3.6.4.

I checked the version of Python in my conda environment with

python --version

Also, I ran the following piece of code with the python command-line interpreter within my conda environment

name = ""

The output was as expected, which is only possible with Python 3.6.
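(For anyone running the same check: an f-string doubles as a version probe, since the syntax is a SyntaxError on anything before 3.6. The names below are illustrative, not the exact snippet from the notebook.)

```python
import sys

# f-strings only parse on Python 3.6+, so this line failing with a
# SyntaxError would itself tell you the interpreter is too old.
name = "world"
print(f"hello {name}")

# The explicit, non-syntax-based check:
print(sys.version_info >= (3, 6))
```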

The problem is unique to my Jupyter notebook. I am using Python 3 to run my Jupyter notebook. I even tried the following within my Jupyter notebook:

!python --version

and the output was Python 3.6.4, so I am not sure what the problem is.

Hi, don’t worry. The problem appears to have solved itself somehow. :smiley:

Same problem here, on a larger dataset.

Running out of 32GB RAM and crashing, even with num_workers=1:

Reduce the batch size to 16, or try decreasing the image size.
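As a rough sanity check on why both knobs help: one float32 input batch costs about bs × channels × sz × sz × 4 bytes, and activations inside the network multiply that considerably, so treat this as a lower bound:

```python
def batch_bytes(bs, sz, channels=3, bytes_per_float=4):
    """Rough memory for one float32 input batch (activations cost far more)."""
    return bs * channels * sz * sz * bytes_per_float

# Halving batch size scales this linearly; halving the image side
# length scales it quadratically:
print(batch_bytes(64, 224) / 1e6)   # ~38.5 MB
print(batch_bytes(16, 224) / 1e6)   # ~9.6 MB
print(batch_bytes(64, 112) / 1e6)   # ~9.6 MB
```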

Looks like you have messed up some of the Python files. Try a git pull.