Data augmentation can help fill gaps in your image database. For example, if your database contains only clean images in a certain orientation, data augmentation can help your network learn to classify noisy, distorted, or rotated images as well.
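A minimal sketch of two such augmentations in pure Python (no image library assumed), treating a tiny grayscale "image" as a list of rows; the function names are my own:

```python
import random

def rotate90(img):
    """Rotate a 2D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def add_noise(img, amount=10, seed=0):
    """Add bounded random noise to each pixel, clamping values to 0..255."""
    rng = random.Random(seed)
    return [[max(0, min(255, p + rng.randint(-amount, amount))) for p in row]
            for row in img]

img = [[0, 50],
       [100, 200]]
rotated = rotate90(img)   # [[100, 0], [200, 50]]
noisy = add_noise(img)    # same shape, slightly perturbed pixel values
```

In practice you would let a library (e.g. fastai's transforms) apply these randomly at training time rather than writing them by hand.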
Is switching to PyTorch a good choice for building custom models?
I’ve seen discussions on Kaggle where people use translation for data augmentation for text. Say you’re doing a text classification task in English, you can use Google Translate to do English -> Spanish (or another language) -> back to English and get augmented text data.
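A minimal sketch of that round trip; a real pipeline would call a translation API (e.g. Google Translate), so the tiny phrase table here is a made-up stand-in just to show the shape of the idea:

```python
# Toy "translation" tables standing in for a real translation service.
# Note the round trip is deliberately not a perfect inverse: that lossiness
# is exactly what produces a paraphrase usable as augmented data.
EN_TO_ES = {"the movie was great": "la película fue genial"}
ES_TO_EN = {"la película fue genial": "the film was great"}

def back_translate(text, to_pivot=EN_TO_ES, from_pivot=ES_TO_EN):
    """Translate text to a pivot language and back; fall back to the
    original text when no translation is available."""
    pivot = to_pivot.get(text, text)
    return from_pivot.get(pivot, text)

augmented = back_translate("the movie was great")
# augmented is a paraphrase ("the film was great") that can be added
# to the training set with the same label as the original sentence.
```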
Sometimes I train a network that ends up always predicting the same class. I had a hard time figuring out why, but it seems to be more or less fixed by reducing dropout.
Is it because, with the default dropout (ps=0.5), the network isn’t expressive enough to properly use the input features in my case? Or is it due to other reasons, e.g. a strong class imbalance (which is the case)?
I’m trying to build some intuition about this. Has anyone experienced this kind of issue before?
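Since class imbalance is one of the suspects: a common mitigation is to weight the loss by inverse class frequency, so always predicting the majority class stops being a cheap win. A sketch of computing such weights (the function name is mine):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by total/count, so rarer classes get larger weights."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / n for cls, n in counts.items()}

labels = ["cat"] * 90 + ["dog"] * 10   # strongly imbalanced toy dataset
weights = inverse_frequency_weights(labels)
# weights["dog"] is 9x weights["cat"], so misclassifying the rare class
# costs more. In PyTorch these would typically be passed as a tensor to
# nn.CrossEntropyLoss(weight=...).
```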
What do you mean by “switch”? The models that fastai uses are PyTorch models.
Here is the info and dates for part 2: https://www.usfca.edu/data-institute/certificates/deep-learning-part-two
I accidentally discovered this idea here:
It is 9 months old!
Will there be an international setup for part 2, like there was for v3 part 1?
If we are unable to attend part 2 in person, can we get the certification by participating in the livestream?
I personally tried this in one of my projects and I can confirm that translation augmentation works (at least it worked in my client’s use case). @cedric
I think only those attending in person get the certificate.
The heat-map on the cat’s face: does it indicate pixel intensities (as in a gray-scale image), or does it somehow highlight the most prominent features?
Please wait for Jeremy to go over it before asking questions
I understand that … my concern is for those commuting to the class: since the in-person class size is now so large, the benefit of attending in person isn’t much greater than simply participating in the forums online.
Can someone confirm this reasoning about dropout vs. L2 regularization?
L2 regularization affects all the parameters in all the activations every epoch, whereas dropout affects only a random subset of the parameters each epoch. This means that if we are too aggressive with L2 we may need fewer epochs, but that may not hold for dropout: for dropout to work, we have to train for more epochs.
Bottom line: dropout can be more effective over longer training runs, whereas L2 can be more effective over shorter ones.
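A toy sketch of the contrast described above, with gradients omitted so only the regularization effects show: L2 / weight decay nudges every weight toward zero on every step, while inverted dropout zeroes a random subset and rescales the rest. All names and hyperparameter values here are illustrative.

```python
import random

def l2_step(weights, lr=0.1, wd=0.01):
    """One SGD step keeping only the weight-decay term: every weight shrinks."""
    return [w - lr * wd * w for w in weights]

def dropout_mask(weights, p=0.5, seed=0):
    """Inverted dropout: zero each value with probability p and rescale
    the survivors by 1/(1-p) to keep the expected magnitude unchanged."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else w / (1 - p) for w in weights]

w = [1.0, -2.0, 3.0, -4.0]
after_l2 = l2_step(w)        # every entry moved slightly toward zero
after_do = dropout_mask(w)   # a random subset zeroed, the rest rescaled
```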
I think you are referring to this Kaggle augmentation for text discussion?
Convolution in GIMP docs: https://docs.gimp.org/2.8/en/plug-in-convmatrix.html
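The GIMP plug-in applies a user-supplied kernel to each pixel's neighborhood. A simplified pure-Python version of a 3x3 kernel application (correlation-style, as image editors typically do; border pixels are left unchanged, which glosses over GIMP's edge-handling options):

```python
def convolve3x3(img, kernel):
    """Apply a 3x3 kernel to the interior pixels of a grayscale image
    (list of rows); border pixels are copied through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

identity = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
assert convolve3x3(img, identity) == img  # identity kernel changes nothing
```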
Have you been able to use fastai on this one? I tried the Kaggle kernel but couldn’t get it to work; also, they are using fastai 0.7.
We will announce more details about part 2 later, but all the core aspects will remain the same: we will offer some form of remote access (as we have for all sessions since we began) and the certificates will only be available for the in-person course (since the certificates are through the Data Institute).
Sorry, my 7 minutes ran longer than expected and I thought I had missed it.