Me too. I think Moustapha Cisse from Facebook introduced that at the Deep Learning Indaba this year.
The code in rossman_data_clean specifies loading multiple files: `table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']`
But only train, test, and store are available on the Kaggle website. Where do we get the rest?
Has an activation-map kind of technique been tried for structured/tabular data, to understand which variables move the classification from one class to another?
If you start a topic about weight norm in the advanced section, Jeremy will answer it there (it will also be covered in part 2).
Someone asked the same question in lesson 4.
Check out my reply there.
I don't think it's the best answer, though.
Check Sylvain’s post out: Mixup data augmentation
About part 2, do you already know when it will take place? (Sorry if that's already answered elsewhere, but I didn't find it.)
Data augmentation can help fill the gaps in your image database. For example, if your database has only clean images in a certain orientation, data augmentation can help your network learn to classify noisy, distorted, or rotated images.
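As a rough sketch (not fastai's actual transform pipeline), rotation and noise augmentations on a numpy image could look something like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Return rotated and noisy variants of an (H, W) image with values in [0, 1]."""
    variants = [np.rot90(img, k) for k in (1, 2, 3)]              # 90/180/270 degree rotations
    noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)   # additive Gaussian noise
    variants.append(noisy)
    return variants

img = rng.random((8, 8))
print(len(augment(img)))  # 4 extra training examples from one image
```

In practice you'd apply transforms like these randomly at training time rather than materializing the variants up front.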
For making custom models, is it a good choice to switch to Pytorch?
I've seen discussions on Kaggle where people use translation for data augmentation on text. Say you're doing a text classification task in English: you can use Google Translate to go English -> Spanish (or another language) -> back to English and get augmented text data.
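A minimal sketch of that back-translation idea. The `translate` function here is a toy stand-in (a tiny lookup table, purely for illustration); a real setup would call an actual MT service:

```python
def translate(text, src, dst):
    """Toy stand-in for a machine-translation API call (hypothetical)."""
    lookup = {
        ("en", "es", "the movie was great"): "la película fue genial",
        ("es", "en", "la película fue genial"): "the movie was excellent",
    }
    return lookup.get((src, dst, text.lower()), text)

def back_translate(text, pivot="es"):
    """English -> pivot language -> English, yielding a paraphrased variant."""
    return translate(translate(text, "en", pivot), pivot, "en")

print(back_translate("The movie was great"))  # "the movie was excellent"
```

The round trip usually preserves the label-relevant meaning while varying the wording, which is exactly what you want from text augmentation.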
Sometimes I train a network that ends up always predicting the same class. I had a hard time finding out why, but it seems to be more or less fixed by reducing dropout.
Is it because, with the default dropout (ps=0.5), the network is not complex enough to properly use the input features in my case? Or is it due to other reasons, e.g. strong class imbalance (which is the case)?
I'm trying to get some intuition about this. Has anyone experienced this kind of issue?
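For intuition on what ps=0.5 costs you: inverted dropout zeroes half the activations on every forward pass and rescales the rest, so a small network loses a lot of effective capacity. A minimal numpy sketch (not fastai's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def inverted_dropout(x, p=0.5):
    """Zero a fraction p of activations and rescale survivors by 1/(1-p)."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1 - p)

acts = np.ones(10_000)
dropped = inverted_dropout(acts, p=0.5)
# Roughly half the activations are zeroed on each pass:
print((dropped == 0).mean())
```

With heavy class imbalance on top of that, collapsing to the majority class is a plausible failure mode, so it may be worth trying both lower ps and a class-weighted loss.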
What do you mean by “switch”? The models that fastai uses are Pytorch models.
Here is the info and dates for part 2: https://www.usfca.edu/data-institute/certificates/deep-learning-part-two
I accidentally discovered this idea here:
It's 9 months old!
Will there be an international fellowship setup like for v3 part 1?
If we are unable to attend part 2 in person, can we get the certificate by participating in the live stream?
I personally tried this in one of my projects and I can confirm translation augmentation works (at least it worked in my client's use case). @cedric
I think only those attending in person get the certificate.
The heat map on the cat's face: does it indicate pixel intensities (as in a gray-scale image), or does it somehow indicate the most prominent features, or something else?
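For context while you wait for the lecture: that kind of heat map is typically not raw pixel intensity but an average over the final convolutional layer's activation maps, showing where the model's features fired. A minimal numpy sketch with illustrative shapes (real shapes depend on the model):

```python
import numpy as np

# Illustrative activations from the last conv layer for one image:
# (channels, height, width) -- e.g. 512 feature maps of 11x11.
acts = np.random.default_rng(0).random((512, 11, 11))

# Averaging across channels gives an (11, 11) heat map highlighting
# WHERE features fired strongly, not the input pixel values.
heatmap = acts.mean(axis=0)
print(heatmap.shape)  # (11, 11)
```

The heat map is then upsampled and overlaid on the input image, which is why it looks pixel-ish even though it comes from feature activations.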
Please wait for Jeremy to go over it before asking questions