00:00:04 More cool guides & posts made by Fast.ai classmates
“Improving the way we work with learning rate”, “Cyclical Learning Rate technique”,
“Exploring Stochastic Gradient Descent with Restarts (SGDR)”, “Transfer Learning using differential learning rates”, “Getting Computers to see better than Humans”
00:03:04 Where we go from here: Lesson 3 -> 4 -> 5
Structured Data Deep Learning, Natural Language Processing (NLP), Recommendation Systems
00:05:04 Dropout discussion with “Dog_Breeds”,
looking at a sequential model’s layers with ‘learn’, Linear activation, ReLU, LogSoftmax
00:18:04 Question: “What kind of ‘p’ to use for Dropout as default”, overfitting, underfitting, ‘xtra_fc=’
00:23:45 Question: “Why monitor the Loss / LogLoss vs Accuracy”
00:25:04 Looking at Structured and Time Series data with Rossmann Kaggle competition, categorical & continuous variables, ‘.astype(‘category’)’
00:35:50 fastai library ‘proc_df()’, ‘yl = np.log(y)’, missing values, ‘train_ratio’, ‘val_idx’. “How (and why) to create a good validation set” post by Rachel
00:50:40 Dealing with categorical variables
like ‘day-of-week’ (Rossmann cont.), embedding matrices, ‘cat_sz’, ‘emb_szs’, Pinterest, Instacart
01:07:10 Improving Date fields with ‘add_datepart’, and final results & questions on Rossmann, step-by-step summary of Jeremy’s approach
Pause
01:20:10 More discussion on using Fast.ai library for Structured Data.
01:23:30 Intro to Natural Language Processing (NLP)
notebook ‘lang_model-arxiv.ipynb’
01:31:15 Creating a Language Model with IMDB dataset
notebook ‘lesson4-imdb.ipynb’
01:31:34 Question: “So why don’t you think that doing just directly what you want to do doesn’t work better?” (referring to the pre-training of a language model before predicting whether a review is positive or negative)
01:33:09 Question: “Is this similar to the char-rnn by karpathy?”
01:39:30 Tokenize: splitting a sentence into an array of tokens
01:43:45 Build a vocabulary ‘TEXT.vocab’ with ‘dill/pickle’; ‘next(iter(md.trn_dl))’
The rest of the video covers the ins and outs of the notebook ‘lesson4-imdb’, don’t forget to use ‘J’ and ‘L’ for 10 sec backward/forward on YouTube videos.
02:11:30 Intro to Lesson 5: Collaborative Filtering with Movielens
Notes
Embeddings vs One-Hot Encoding: embeddings are better than one-hot encodings because they let the network learn relationships between categories (e.g. Saturday and Sunday are both weekends). One-hot encoding treats every value as equally distant from every other: Wednesday and Saturday look exactly as related as Saturday and Sunday. In other words, embeddings give a neural network a chance to learn “rich representations”.
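The point above can be shown numerically: with one-hot vectors every pair of days is the same distance apart, while an embedding can place the weekend days close together. A minimal NumPy sketch (the “learned” embedding values below are made up by hand just to illustrate the idea):

```python
import numpy as np

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

# One-hot: every pair of days is exactly the same distance apart,
# so "Sat vs Sun" looks no more similar than "Wed vs Sat".
one_hot = np.eye(len(days))
d_sat_sun = np.linalg.norm(one_hot[5] - one_hot[6])
d_wed_sat = np.linalg.norm(one_hot[2] - one_hot[5])
assert d_sat_sun == d_wed_sat  # all pairwise distances are equal

# Embedding: each day gets a small dense vector whose values are
# learned during training; weekend days can end up close together.
# These numbers are hand-picked stand-ins for learned weights.
emb = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.85, 0.15], [0.8, 0.1], [0.7, 0.3],
    [0.1, 0.9], [0.15, 0.85],   # Sat and Sun share a "weekend" direction
])
d_sat_sun = np.linalg.norm(emb[5] - emb[6])
d_wed_sat = np.linalg.norm(emb[2] - emb[5])
assert d_sat_sun < d_wed_sat  # weekends sit closer to each other
```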
Overfitting vs. underfitting, an example:
training loss, validation loss, accuracy
0.3, 0.2, 0.92 = underfitting (training loss higher than validation loss)
0.2, 0.3, 0.92 = overfitting (training loss lower than validation loss)
Thanks - fixed now. FYI the z flag to tar is now redundant AFAIK - it figures it out for itself. (Although I’m still glad to have this fixed since it was using unnecessary space!)
Thank you. Could you please also include this IMDB data in /datasets/fast.ai/ on Crestle? I don’t know why unpacking takes so long, but it took more than 2 hours.
I have a question regarding the RMSPE (Root Mean Square Percentage Error) calculation. In the video (https://youtu.be/gbceqO8PpBg?t=39m45s) Jeremy makes the point that ln(a/b) = ln(a)-ln(b). However, I don’t see how this relates to the calculation of RMSPE using exp_rmspe.
We can express this in two lines as:
pct_var=(targ-y_pred)/targ
RMSPE = sqrt(mean(pct_var^2))
Since we took the ln of the data previously, we now need to take the exponent. So, in 3 lines:
targ=exp(targ); y_pred=exp(y_pred)
pct_var=(targ-y_pred)/targ
RMSPE = sqrt(mean(pct_var^2))
It looks like that’s exactly what the function exp_rmspe does:
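For reference, here is a sketch of what exp_rmspe roughly does, written from the two-step description above rather than copied from the fastai source (so treat the exact signature as an assumption). The connection to ln(a/b) = ln(a) − ln(b) is that in log space the difference y_pred − targ equals ln(pred/actual), so an RMSE on the logged values already behaves like a percentage (ratio) error; exp_rmspe instead undoes the log and computes the percentage error on the original scale:

```python
import numpy as np

def exp_rmspe(y_pred, targ):
    """Sketch: undo the earlier log transform, then compute the
    Root Mean Square Percentage Error on the original scale."""
    targ = np.exp(targ)                       # back to the original sales scale
    pct_var = (targ - np.exp(y_pred)) / targ  # percentage error per row
    return np.sqrt((pct_var ** 2).mean())

# Perfect predictions give an error of exactly 0.
y = np.log(np.array([100.0, 200.0, 300.0]))
assert exp_rmspe(y, y) == 0.0
```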
What would you do in a situation where you have missing data? For example, imagine the Rossmann data, but you only had weather data for the two most recent years. You would still like to include the years for which you don’t have weather data, because you have other features for them.
One idea would be to turn the continuous variable into a categorical variable with bins, so that the years where you don’t have temperature data can be their own bin and go into the embedding layers.
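That binning idea can be sketched with pandas (the column name, bin edges, and labels below are invented for illustration): cut the continuous column into categories, then add an explicit “missing” category so those rows get their own embedding row instead of being dropped.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"temperature": [3.0, 15.5, np.nan, 22.1, np.nan]})

# Bin the continuous variable; pd.cut leaves NaN rows as NaN...
df["temp_cat"] = pd.cut(df["temperature"],
                        bins=[-np.inf, 10, 20, np.inf],
                        labels=["cold", "mild", "hot"])

# ...then give the missing rows their own category, so the embedding
# layer can learn a representation for "no weather data".
df["temp_cat"] = df["temp_cat"].cat.add_categories("missing").fillna("missing")

print(df["temp_cat"].tolist())
```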
I’m not clear on embedding matrices. We start with a rank-1 tensor (one row × n columns). Then we create a 7×4 matrix for days of the week. We pick ‘Sun’ in the rank-1 tensor and replace it with the 4-column row. I’m not clear how those 4 columns fit into our rank-1 tensor.
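One way to see the answer to the question above: the 4 values don’t go back into the rank-1 tensor at all. Each categorical index is looked up as a row of the 7×4 matrix, so a rank-1 tensor of n indices becomes a rank-2 tensor of shape n×4 that is fed into the next layer. A NumPy sketch (the embedding values are random stand-ins for learned weights):

```python
import numpy as np

# Hypothetical 7x4 embedding matrix for day-of-week (learned in practice,
# random here just to show shapes).
emb = np.random.randn(7, 4)

# A rank-1 tensor of day indices for 5 rows of data (0=Mon ... 6=Sun).
days = np.array([6, 0, 2, 6, 4])

# The lookup replaces each index with the matching 4-value row, so the
# rank-1 tensor of shape (5,) becomes a rank-2 tensor of shape (5, 4).
looked_up = emb[days]
assert looked_up.shape == (5, 4)

# "Sun" (index 6) always maps to the same row of the matrix.
assert (looked_up[0] == emb[6]).all()
```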
Is there somewhere we can access the Arxiv dataset used for the language modeling notebook? The path in the notebook is /data2/datasets/part1/arxiv/, it’s not in the fastai Paperspace machine or on files.fast.ai/data. Is it available to us anywhere?
If anybody else gets a stack trace on the first cell of lesson4-imdb.ipynb, you probably don’t have the ‘en’ spaCy model installed (I didn’t, using the fast.ai Paperspace machine image). You can check by inserting a cell with just
import spacy
spacy.load('en')
If that fails, then (in another terminal) run python -m spacy download en, and after a few minutes you’ll have that model and it will work.
Similarly, I didn’t have a data/aclImdb/models directory in which to save the TEXT object.
nikhil,
Before, this would have been one-hot encoded, since it is categorical rather than continuous. The number of embeddings will also affect the shape of the weight matrix.