Lesson 6 In-Class Discussion ✅

Data augmentation can help fill the gaps in your image database. For example, if your database has only clean images in a certain orientation, data augmentation can help your network learn to classify noisy, distorted, or rotated images.

2 Likes
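For anyone who wants to try it, here's a minimal sketch of lesson-style augmentation, assuming fastai v1's API; the `'data/pets'` path and the parameter values are placeholders, not the lesson's actual settings:

```python
from fastai.vision import *

# Sketch of fastai v1's built-in augmentation: random flips, rotations,
# zooms, lighting changes, and warps applied on the fly during training.
tfms = get_transforms(do_flip=True, max_rotate=15.0,
                      max_zoom=1.1, max_lighting=0.3, max_warp=0.2)

# 'data/pets' is an illustrative path with train/valid subfolders.
data = ImageDataBunch.from_folder('data/pets', ds_tfms=tfms, size=224)
```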

For building custom models, is it a good choice to switch to PyTorch?

I’ve seen discussions on Kaggle where people use translation for data augmentation for text. Say you’re doing a text classification task in English, you can use Google Translate to do English -> Spanish (or another language) -> back to English and get augmented text data.

7 Likes
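If you want to try it, here's a minimal sketch of the round trip. `translate` is a hypothetical stand-in for whatever MT service you call (e.g. the Google Translate API); it is not a real library function:

```python
# Back-translation augmentation: English -> pivot language -> English.
# `translate` is a hypothetical helper for your MT service of choice.
def translate(text: str, src: str, dest: str) -> str:
    raise NotImplementedError("call your translation API here")

def back_translate(text: str, pivot: str = 'es') -> str:
    """Round-trip through a pivot language to get a paraphrased variant."""
    return translate(translate(text, src='en', dest=pivot),
                     src=pivot, dest='en')

augmented = back_translate("The movie was surprisingly good.")
```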

Sometimes I train a network that ends up always predicting the same class. I had a hard time finding out why, but it seems to be more or less fixed by reducing dropout.
Is it because, with the default dropout (ps=0.5), the network isn't complex enough to properly use the input features in my case? Or is it for other reasons, e.g. strong class imbalance (which is the case)?
I'm trying to build some intuition about this. Has anyone experienced this kind of issue before?

1 Like
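Both can contribute; strong class imbalance in particular is a classic cause of a model collapsing to the majority class. Two things commonly tried for this symptom, sketched in plain PyTorch (all sizes and numbers below are placeholders, not recommendations):

```python
import torch
import torch.nn as nn

# 1) Reduce dropout in the head (fastai's default is ps=0.5).
head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                     nn.Dropout(p=0.25),      # lowered from the 0.5 default
                     nn.Linear(256, 5))

# 2) Counteract class imbalance with per-class loss weights,
#    e.g. inverse class frequency. Counts here are hypothetical.
counts = torch.tensor([500., 120., 80., 40., 10.])
weights = counts.sum() / (len(counts) * counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)
```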

What do you mean by “switch”? The models that fastai uses are PyTorch models.
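You can verify this directly; a quick sketch assuming fastai v1's API (the path and architecture are placeholders):

```python
import torch
from fastai.vision import *

# learn.model is an ordinary torch.nn.Module you can inspect,
# modify, or train with plain PyTorch code.
data = ImageDataBunch.from_folder('data/pets',
                                  ds_tfms=get_transforms(), size=224)
learn = create_cnn(data, models.resnet34)
print(isinstance(learn.model, torch.nn.Module))  # True
```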

Here is the info and dates for part 2: https://www.usfca.edu/data-institute/certificates/deep-learning-part-two

14 Likes

Nice! I stumbled on this idea here:

It's 9 months old!!

5 Likes

Will there be an international setup like there was for v3 part 1?

11 Likes

If we are unable to attend part 2 in person, can we get the certificate by participating in the livestream?

4 Likes

I personally tried this in one of my projects and can confirm translation augmentation works (at least it worked in my client's use case). @cedric

5 Likes

I think only those attending in person get the certificate :frowning:

The heat-map on the cat's face: does it indicate pixel intensities (as in a grayscale image), or does it somehow indicate the most prominent features?

Please wait for Jeremy to go over it before asking questions :slight_smile:

3 Likes
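In the meantime, for anyone curious: heatmaps like that are typically class activation maps, which highlight the spatial regions whose features most pushed the network toward its prediction, not raw pixel intensities. A minimal sketch of the classic CAM computation; the model, layer choice, and input here are assumptions, not the lesson's exact code:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet34(pretrained=True).eval()

feats = {}
def save_feats(module, inp, out):
    feats['x'] = out.detach()            # (1, 512, 7, 7) feature maps

model.layer4.register_forward_hook(save_feats)

img = torch.randn(1, 3, 224, 224)        # stand-in for the cat image tensor
cls = model(img).argmax(dim=1).item()

# Weight the last conv block's feature maps by the predicted class's
# weights in the final linear layer, then normalize and upsample.
cam = torch.einsum('c,chw->hw', model.fc.weight[cls], feats['x'][0])
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
heatmap = F.interpolate(cam[None, None], size=img.shape[-2:],
                        mode='bilinear', align_corners=False)[0, 0]
```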

I understand that… my concern is that for those commuting to the class, the benefit of attending in person isn't much greater than simply participating in the forums online, since the in-person class size is now so large.

5 Likes

Can someone confirm this reasoning about dropout vs. L2 regularization?

L2 regularization affects all the parameters at every update, whereas dropout only affects a random subset of the activations on each pass. That means if we are too aggressive with L2 we may need fewer epochs, but that may not be the case for dropout: for dropout to work, we have to train for more epochs.

Bottom line: dropout can be more effective over longer training runs, whereas L2 can be more effective for shorter ones.
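To make the comparison concrete, here's how the two look in plain PyTorch (sizes and values are illustrative): dropout is a stochastic layer applied per forward pass, while L2 is a deterministic penalty on every weight at every optimizer step, which is one reason dropout's effect needs more training to average out.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout: randomly zeroes activations each forward pass
    nn.Linear(50, 10),
)

# L2 regularization in PyTorch is the optimizer's weight_decay:
# a penalty applied to *all* weights at *every* update step.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
```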

I think you are referring to this Kaggle augmentation for text discussion?

2 Likes

Convolution in GIMP docs: https://docs.gimp.org/2.8/en/plug-in-convmatrix.html

1 Like
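The same convolution-matrix idea in a few lines of Python; a sketch using scipy with the classic 3x3 sharpen kernel (the image and kernel values are just illustrations):

```python
import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(64, 64)             # stand-in grayscale image

# Classic sharpen kernel, same idea as GIMP's convolution matrix:
# slide the kernel over the image and take weighted sums.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

out = convolve2d(img, sharpen, mode='same', boundary='symm')
```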

Have you been able to use fastai on this one? I tried the Kaggle kernel but didn't get it to work; plus, they're using fastai 0.7.

We will announce more details about part 2 later, but all the core aspects will remain the same: we will offer some form of remote access (as we have for all sessions since we began) and the certificates will only be available for the in-person course (since the certificates are through the Data Institute).

12 Likes

Sorry, my 7 minutes ran longer than expected and I thought I missed it.