Wiki: Lesson 1

Thanks for the link… Had to update it to work with Python 3.6, since urllib2 no longer exists in Python 3 (it was split into urllib.request and urllib.error). Submitted an update on the GitHub repo too.

Here is an interesting use of image recognition to fight corruption in the extractive industries. This webinar is being shared by DataKind, which helps non-profits make use of their data for good.

http://www.datakind.org/blog/webinar-fighting-corruption-in-the-extractives-industry

If you want an opportunity to use the skills you are gaining in this course, see the folks at DataKind.

9 Likes

I just finished building a deep learning PC early last month. I followed the general instructions from the last class version to install CUDA 8 and cuDNN 6. I see in the startup script for class version 2 that it’s using CUDA 9 and cuDNN 7. Will this course run with the old 8/6, or will I need to upgrade to 9/7?

Thanks!

1 Like

Should work fine with the older versions, but some architectures will be far slower.

1 Like

Thanks Cedric, solved my issue

Hi All,

I found this useful tool to download images from Google Images

I am building a simple human race image classifier. I created a folder called ~/data/people_original with three folders in it called caucasian, african and asian, and populated each folder using the command

google-images-download download 'african man' 'african woman' --keywords '' --download-limit 100

from within each folder.

So now I have three folders, ~/data/people_original/asian, ~/data/people_original/african and ~/data/people_original/caucasian, each with ~200 images in them.

I was wondering if anyone has any munging code that could be repurposed for splitting these images into the required folder structure present in the dogscats folder, i.e. models, sample, test1, train, valid.

I am guessing this is how most people will attempt the lesson 1 homework, so I figured this might be a useful snippet, or a recommended way of doing something that someone has already written.

Or perhaps it is already in (or could be added to) the FastAI library.
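Something like this rough sketch is what I’m imagining (the destination path, the 20% validation fraction, and copying rather than moving are just my guesses):

```python
import os, random, shutil

SRC = os.path.expanduser('~/data/people_original')
DST = os.path.expanduser('~/data/people')
VALID_FRAC = 0.2  # hold out 20% of each class for validation

random.seed(42)  # reproducible split

for cls in os.listdir(SRC):
    files = os.listdir(os.path.join(SRC, cls))
    random.shuffle(files)
    n_valid = int(len(files) * VALID_FRAC)
    for split, subset in [('valid', files[:n_valid]), ('train', files[n_valid:])]:
        out_dir = os.path.join(DST, split, cls)
        os.makedirs(out_dir, exist_ok=True)
        for fname in subset:
            shutil.copy(os.path.join(SRC, cls, fname), os.path.join(out_dir, fname))
```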

Kind regards,

Luke Byrne

10 Likes

Thanks for this amazing deep learning course.
I just shifted from v1 to v2.
Where can I access the v2 .ipynb notebooks?
Are there any setup instructions for Mac and PC for v2 of the course (conda yaml file)?
The main fast.ai site only seems to contain v1 content.
I use a Mac for reviewing the notebooks with sample data,
and a PC with a GPU for more compute-intensive tasks.

1 Like

@cpgrant the v2 video link is at the top of this post.

Here it is again - https://www.youtube.com/watch?v=IPBSB1HLNLo&feature=youtu.be

Hey Luke,
Try modifying this for your purposes – @rodjun created this script for the dogs vs. cats competition, but the idea is the same for what you’re doing:

Best of luck!
Maureen

1 Like

Hi.
I watched some of the www.datakind.org webinars and they are pretty interesting. Thanks for posting them here.
I signed up as volunteer some time ago but have not been able to contribute so far.
What is your opinion on DataKind and their projects?
Do you know of other similar initiatives?
Best!

1 Like

Ok… tried it with around 175 images each of white tigers and zebras downloaded from Google Images, with 150 in the training set and the rest in the validation set.

First a few questions:

  1. What does a different number of epochs mean? Does the system build the layers from scratch in each epoch, or does the next epoch build on the weights from the previous epoch?
  2. I am used to scikit-learn’s train_test_split with random shuffling. Here we do the batch sizing instead. Is the data within batches randomized every time? Each time I run an epoch, the loss and accuracy are slightly different.
  3. I used a batch size of 15, but my learning rate schedule still doesn’t work.
  4. This is what it does when I run data augmentation. Is it because the images are high definition? I have gone through the images; they aren’t that heavy.

Thanks for the help

1 Like

I have not been able to connect with them on their bigger / longer-term projects.
http://www.datakind.org/do-good-with-data
But I have participated in two of their “data dives”.
http://www.datakind.org/datadives
These are basically weekend-long, data-focused hackathons.

In NYC they get 50–100 really amazing folks from major financial and Internet organizations to come together to work on interesting data sets with interesting government and non-profit organizations. I have found these events very useful personally, and I feel like I have helped do some good for the non-profit organizations.

The Taproot Foundation also provides pro bono opportunities. They are not specifically related to data science or machine learning; however, I can imagine they would find folks with skills in these areas useful.

https://www.taprootfoundation.org/

3 Likes

Not sure whether this helps or not…

  • The neural net is basically trying to improve its weights for better performance on your metrics… An epoch is one full pass over the whole dataset, so the epoch count is the number of times your net has seen all the data…

  • Not sure about the randomisation of the dataset, but the data is split according to your batch size… it’s how many images the net sees in one go, like a pipeline (see the sketch just below)…
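A minimal sketch of the epoch/mini-batch relationship in plain Python (the fastai/PyTorch data loaders handle this internally; the per-epoch shuffling line is my assumption about typical loaders):

```python
import random

data = list(range(300))  # stand-in for 300 training images
bs = 15                  # batch size
epochs = 3

for epoch in range(epochs):
    random.shuffle(data)               # typical loaders reshuffle every epoch
    for i in range(0, len(data), bs):  # one iteration per mini-batch
        batch = data[i:i + bs]
        # forward pass, loss, backward pass, weight update happen here
    # one epoch = the net has seen every example exactly once;
    # weights carry over into the next epoch (no restart from scratch)
```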

I couldn’t recognise the last plots (it seems the images are zoomed in so far that you can see individual pixels?).

Just one question…

Does the order of images in the dataset matter (provided the classes have an equal distribution in terms of counts)?

1 Like

Thanks @tgb417 for your reply. I’ll check it out.

What is the difference between the two variables sz and bs? bs, I understand, is batch size. What is sz, and how does it affect the model? And how does bs affect the model?

1 Like

Image sizes…

1 Like

sz determines the dimensions (height × width) of your input images. Having a smaller image helps speed up training: the number of convolution operations is significantly reduced with a smaller input image size, and since most of the network is conv layers, you can see a significant boost in training performance. In the course, Jeremy suggests starting with a small sz parameter, training quickly to reasonable weights, then increasing sz (in powers of 2) up to the original dimensions of the image.

The bs parameter specifies the number of images considered in each iteration (mini-batch). You want as large a batch size as possible so that gradient updates are more accurate, so the rule of thumb many follow is to use the largest batch size that fits in GPU memory. Having said that, a smaller bs is not a bad approach either: updates happen more often within one epoch, so there may be a chance to train faster.
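For concreteness, here is roughly where sz and bs plug into the lesson 1 pipeline (a fastai v0.7-style sketch; PATH and the exact values are just examples):

```python
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.dataset import *

PATH = 'data/dogscats/'  # expects train/ and valid/ subfolders
arch = resnet34
sz = 224  # images are resized/cropped to sz x sz before entering the net
bs = 64   # number of images per mini-batch

data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz), bs=bs)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)  # learning rate 0.01, 3 epochs
```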

7 Likes

I am running the lesson1 notebook on a Crestle server.

And I have enabled the GPU.

I just cloned the fastai repo and ran the notebook directly.

But it takes too much time to fit the model (almost 30 minutes).

Is something wrong with my setup?

Do I need to set up or configure something to make the GPU work?

It takes around 30–35 mins…
Try playing around with bs.
Or run nvidia-smi to see whether you are actually using the GPU…
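You can also sanity-check from inside the notebook (assuming the PyTorch backend the course uses):

```python
import torch

print(torch.cuda.is_available())  # should print True on a GPU instance
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the GPU PyTorch sees
```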

One last question… when you reduce sz, does it resize the images in each batch/epoch, or does it take only those images that already fit the sz dimensions? Thanks for your help so far.