Wiki: Lesson 1


(Andrea) #144

Thanks! This helped me a lot. I used your code as if I were sending a submission, and at least it maps the name of each file to its assigned label. With this I can now check whether testing is doing fine; I just have to code a few more things to make it more ‘automatic’.


(Aditya) #145

You can share your work here… (or send a PR)
Thanks …

Also it isn’t my code…


(Andrea) #146

Oops, sorry, I know it is not your code, I just typed fast…

What I did was basically the same as what @SlowLlama did, with the obvious changes to fit my dataset’s characteristics. I apologize in advance if my code is very basic; I just started with Python.


(Aditya) #147

Nice Code…


#148

I’m currently working on Lesson 1, trying out the model on my own dataset (35 images in the training set and 26 in the validation set, of stuffed animals and toy cars). I’ve tried lowering the batch size as mentioned by other members here, but I still can’t get the learning rate vs. iterations plot to show up.

I believe the batch size is set by modifying learn.data.bs (if that’s wrong, please correct me).

In this post, it is mentioned that the number of iterations is equal to the training dataset size divided by the batch size, which in my case with a batch size of 1 should produce 35 points… and with the first 10 and last 5 cut out should still have 20 points left to plot.

Any help is appreciated.
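
For reference, here is a minimal sketch of the learning rate finder, assuming the fastai 0.7 library used in the course (the PATH, sz, and arch values below are placeholders). Note that the batch size is fixed when the data object is created; assigning to learn.data.bs afterwards does not rebuild the data loaders, which may be why the plot never changes:

from fastai.conv_learner import *

PATH = 'data/toys/'  # hypothetical path with train/ and valid/ folders
sz = 224             # image size
arch = resnet34

# Pass the batch size here instead of mutating learn.data.bs later.
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=4)
learn = ConvLearner.pretrained(arch, data, precompute=True)

learn.lr_find()
# plot() drops the first 10 and last 5 points by default; with only a
# handful of iterations, skip fewer so there is something left to draw.
learn.sched.plot(n_skip=0, n_skip_end=1)

Also note that lr_find() stops early once the loss starts blowing up, so a tiny dataset can produce fewer iterations than dataset size divided by batch size.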


#149

Just having some fun with Santa vs. Jack Skellington… I’m not sure how so many kids were confused in the movie. My CNN seems to make everything clear… :wink:

Here’s a link to the dataset as per people’s request. It’s really light, so feel free to add: http://bit.ly/2o4Sgjh


#150

I have been trying to run lesson1.ipynb, but I am facing difficulties with importing the libraries.

https://i.imgur.com/wpMJDd4.png

I know that this question has been answered with ‘use Python 3.6’, but I am already running this in a conda environment with Python 3.6.

Any thoughts on what’s going wrong here?


(Oren Dar) #151

Both the error description and the stack trace seem to indicate Python 3.5. Have you tried updating your Python version and re-running?


(Phong) #152

Are you still keeping the dataset somewhere else? Can you please share your dataset with everyone?


#153

Yes, I updated Python to 3.6.4.

I checked the version of Python in my conda environment with

python --version

Also, I ran the following piece of code with the python command-line interpreter within my conda environment

name = "fast.ai"
print(f{name})

The output was fast.ai, which is only possible with Python 3.6 or later, since f-strings were added in 3.6.

The problem is unique to my Jupyter notebook, which I am running with a Python 3 kernel. I even tried the following within the notebook:

!python --version

and the output was Python 3.6.4, so I am not sure what the problem is.
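
One thing worth checking: !python --version runs in a shell subprocess, which is not necessarily the same interpreter that the notebook kernel itself uses. A minimal sketch for inspecting the kernel’s own Python from inside the notebook:

import sys
print(sys.version)     # version string of the interpreter running the kernel
print(sys.executable)  # path to that interpreter's binary

If sys.version reports 3.5 here, the kernel is registered against a different environment even though the shell finds 3.6.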


#154

Hi, don’t worry. The problem appears to have solved itself somehow. :smiley:


(Florian Peter) #155

Same problem here, on a larger dataset.

Running out of 32GB RAM and crashing, even with num_workers=1.


(why) #156

Reduce the batch size to 16, or try decreasing the image size.
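
For example, assuming the fastai 0.7 API from the course, both knobs are set when the data object is built (the sz and bs values below are illustrative):

sz = 128  # smaller image side means less memory per image
bs = 16   # smaller batch means fewer images held in memory at once
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=bs)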


(why) #157

Looks like you have messed up some of the Python files. Try a git pull.


(why) #158

Yes, you can also set up a spot instance to do the course and burn the AWS rigs for deep learning. Just joking.

Read this excellent guide to setting up the fast.ai spot AWS AMI:
http://wiki.fast.ai/index.php/AWS_Spot_instances


#159

I’m not sure if I’m the only one experiencing this, but when you sign up for Paperspace and select the East Coast region, the fast.ai environment is not available. You have to send a request to the team to have it enabled.

So far, it’s been two days and no word.


(Reshama Shaikh) #160

@dillon Should I update the Paperspace instructions with this added step? Or are you getting so much traffic that it’s causing the delay?


(Kannan Pattu) #161

I got the same issue. I just logged off, logged back in, and selected the fast.ai template, and it’s working like a charm.


#163

It didn’t work for me. I can certainly understand wanting some kind of manual check; it could be for any number of reasons, including traffic, bots, etc.

I also tried in multiple browsers. No dice.

The multi-day wait for a response is kind of frustrating though. If I have to do a cross-country build, will I see really poor pings?


(Anders) #164

I am also getting “Cannot take a larger sample than population when ‘replace=False’”. I’m training on a dataset with 40 validation, 267 training, and 101 test photos.
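
That message comes from NumPy: np.random.choice refuses to draw more unique samples than the population holds when replace=False. If I remember correctly, the lesson 1 plotting helpers pick four random images per category, so if a category in your validation set has fewer than four matching images, the call fails. A minimal sketch of the failure and one workaround (the sample size of 4 is an assumption mirroring the notebook):

import numpy as np

idxs = np.arange(3)  # e.g. only 3 validation images match a given mask
np.random.choice(idxs, 4, replace=False)
# ValueError: Cannot take a larger sample than population when 'replace=False'

# Workaround: cap the sample at the population size.
n = min(4, len(idxs))
picked = np.random.choice(idxs, n, replace=False)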