Wiki: Lesson 4

<<< Wiki: Lesson 3 | Wiki: Lesson 5 >>>

Lesson links

Articles

Video timeline

  • 00:00:04 More cool guides & posts made by Fast.ai classmates
    “Improving the way we work with learning rate”, “Cyclical Learning Rate technique”,
    “Exploring Stochastic Gradient Descent with Restarts (SGDR)”, “Transfer Learning using differential learning rates”, “Getting Computers to see better than Humans”

  • 00:03:04 Where we go from here: Lesson 3 -> 4 -> 5
    Structured Data Deep Learning, Natural Language Processing (NLP), Recommendation Systems

  • 00:05:04 Dropout discussion with “Dog_Breeds”,
    looking at a sequential model’s layers with ‘learn’, Linear activation, ReLU, LogSoftmax

  • 00:18:04 Question: “What kind of ‘p’ to use for Dropout as default”, overfitting, underfitting, ‘xtra_fc=’

  • 00:23:45 Question: “Why monitor the Loss / LogLoss vs Accuracy”

  • 00:25:04 Looking at Structured and Time Series data with the Rossmann Kaggle competition, categorical & continuous variables, ‘.astype('category')’

  • 00:35:50 fastai library ‘proc_df()’, ‘yl = np.log(y)’, missing values, ‘train_ratio’, ‘val_idx’. “How (and why) to create a good validation set” post by Rachel

  • 00:39:45 RMSPE: Root Mean Square Percentage Error,
    create ModelData object, ‘md = ColumnarModelData.from_data_frame()’

  • 00:45:30 ‘md.get_learner(emb_szs,…)’, embeddings

  • 00:50:40 Dealing with categorical variables
    like ‘day-of-week’ (Rossmann cont.), embedding matrices, ‘cat_sz’, ‘emb_szs’, Pinterest, Instacart

  • 01:07:10 Improving Date fields with ‘add_datepart’, and final results & questions on Rossmann, step-by-step summary of Jeremy’s approach

Pause

  • 01:20:10 More discussion on using the fastai library for Structured Data.

  • 01:23:30 Intro to Natural Language Processing (NLP)
    notebook ‘lang_model-arxiv.ipynb’

  • 01:31:15 Creating a Language Model with IMDB dataset
    notebook ‘lesson4-imdb.ipynb’

  • 01:31:34 Question: “So why don’t you think that doing just directly what you want to do doesn’t work better?” (referring to the pre-training of a language model before predicting whether a review is positive or negative)

  • 01:33:09 Question: “Is this similar to the char-rnn by karpathy?”

  • 01:39:30 Tokenize: splitting a sentence into an array of tokens

  • 01:43:45 Build a vocabulary ‘TEXT.vocab’ with ‘dill/pickle’; ‘next(iter(md.trn_dl))’

  • The rest of the video covers the ins and outs of the notebook ‘lesson4-imdb’, don’t forget to use ‘J’ and ‘L’ for 10 sec backward/forward on YouTube videos.

  • 02:11:30 Intro to Lesson 5: Collaborative Filtering with Movielens

Notes

Embeddings vs One-Hot Encoding: Embeddings are better than one-hot encodings because they allow relationships between values to be expressed (e.g. Saturday and Sunday are both weekends). One-hot encoding treats every value as equally distinct: Wednesday and Saturday end up exactly as far apart as Saturday and Sunday. In other words, an embedding gives a neural network the chance to learn “rich representations”.
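
A minimal PyTorch sketch of the difference (the 7 days and the 4-dimensional embedding size are just illustrative, not the lesson’s code):

import torch
import torch.nn as nn

days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']

# One-hot: each day is a row of the identity matrix, so every pair of days
# is equally (dis)similar
one_hot = torch.eye(len(days))
print(one_hot[5], one_hot[6])        # Sat and Sun share nothing

# Embedding: a learnable 4-dimensional vector per day; training can move
# Sat and Sun close together because both behave like weekend days
emb = nn.Embedding(len(days), 4)
print(emb(torch.tensor([5, 6])))     # two 4-dimensional "rich representations"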

Overfitting vs. Underfitting, an example

training loss, validation loss, accuracy
0.3, 0.2, 0.92 = underfitting (training loss higher than validation loss)
0.2, 0.3, 0.92 = overfitting (training loss lower than validation loss)

Thanks @grez911, giving this a shot now…

@anurag, any chance that could be added to the Crestle template?

Thanks - fixed now. FYI the z flag to tar is now redundant AFAIK - it figures it out for itself. (Although I’m still glad to have this fixed since it was using unnecessary space!)

Done. spacy.load('en') works as expected.

Thank you. Could you please also include this IMDb data in /datasets/fast.ai/ on Crestle? I don’t know why it takes so long to unpack, but it took more than 2 hours.

Now available under /datasets/fast.ai/data/aclImdb.

I have a question regarding the RMSPE (Root Mean Square Percentage Error) calculation. In the video (https://youtu.be/gbceqO8PpBg?t=39m45s) Jeremy makes the point that ln(a/b) = ln(a)-ln(b). However, I don’t see how this relates to the calculation of RMSPE using exp_rmspe.

RMSPE (https://www.kaggle.com/c/rossmann-store-sales#evaluation) is defined as sqrt(mean(((targ-y_pred)/targ)^2))

We can express this in two lines as:
pct_var=(targ-y_pred)/targ
RMSPE = sqrt(mean(pct_var^2))

Since we took the ln of the data previously, we now need to take the exponent. So, in 3 lines:
targ=exp(targ); y_pred=exp(y_pred)
pct_var=(targ-y_pred)/targ
RMSPE = sqrt(mean(pct_var^2))
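
For instance, a quick numpy sketch of those three lines (the numbers are made up):

import math
import numpy as np

targ = np.log(np.array([200., 150., 320.]))     # made-up log-space targets
y_pred = np.log(np.array([210., 140., 300.]))   # made-up log-space predictions

targ, y_pred = np.exp(targ), np.exp(y_pred)     # undo the log transform
pct_var = (targ - y_pred) / targ                # percentage error per row
print(math.sqrt((pct_var**2).mean()))           # RMSPE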

It looks like that’s exactly what the function exp_rmspe does:

def exp_rmspe(y_pred, targ):
    targ = inv_y(targ)                      # inv_y undoes the earlier log transform (np.exp)
    pct_var = (targ - inv_y(y_pred))/targ   # per-row percentage error
    return math.sqrt((pct_var**2).mean())   # root mean square of the percentage errors

This all makes sense, but I don’t see how any of it relates to ln(a/b) = ln(a)-ln(b).

Help?

What would you do in a situation where you have missing data? For example, imagine the Rossmann data, but you only had weather data for the two most recent years. You would still like to include all the years for which you don’t have weather data, because you have other features for them.

One idea would be to turn the continuous variable into a categorical variable with bins, so that the years for which you don’t have a temperature can just be their own bin and go into the embedding layers?
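
Something like this rough pandas sketch is what I have in mind (column names, bin edges and labels are made up):

import numpy as np
import pandas as pd

# made-up frame: temperature only exists for the two most recent years
df = pd.DataFrame({'year': [2013, 2014, 2015, 2016],
                   'temperature': [np.nan, np.nan, 12.5, 17.0]})

# bin the continuous variable into categories...
df['temp_cat'] = pd.cut(df['temperature'], bins=[-10, 0, 10, 20, 40],
                        labels=['freezing', 'cold', 'mild', 'warm'])

# ...and give the missing years their own category, so the embedding layer
# can learn a representation for "temperature unknown"
df['temp_cat'] = df['temp_cat'].cat.add_categories(['missing']).fillna('missing')
print(df)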

Just sharing my notes for this lesson:

Training, validation, test sets, and notes on dropout:

Encoding, structured data predictions, including some extra notes on one hot encoding:

Natural language processing (I kept running into errors running the code here):

See the ML course here - we show how to handle missing data in some detail. (TL;DR - fastai can do it for you)
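
For reference, a rough sketch of the old fastai (0.7) proc_df call (df_raw and 'Sales' are placeholders; the exact return values may differ between library versions):

from fastai.structured import proc_df   # fastai 0.7 import path

# proc_df numericalises the dataframe; continuous columns with missing values
# are filled (with the median) and a boolean <col>_na column is added
df, y, nas = proc_df(df_raw, 'Sales')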

Thanks for sharing your notes!

Wrote my first blog post on entity embeddings of categorical variables for structured data; hope you find it useful. Any suggestions are most welcome.

I’m not clear on embedding matrices. We start with a rank-1 tensor (one row × n columns). Then we create a 7×4 matrix for the days of the week. We pick ‘Sun’ in the rank-1 tensor and replace it with the 4-column value. What I’m not clear on is how those 4 columns fit into our rank-1 tensor.

Is there somewhere we can access the Arxiv dataset used for the language modeling notebook? The path in the notebook is /data2/datasets/part1/arxiv/, it’s not in the fastai Paperspace machine or on files.fast.ai/data. Is it available to us anywhere?

If anybody else gets a stack trace on the first cell of lesson-4-imdb.ipynb, you probably don’t have the ‘en’ spacy model installed (I didn’t, using the fast.ai paperspace machine image). You can check by inserting a cell with just

import spacy
spacy.load('en')

If that fails, then (in another terminal) run python -m spacy download en, and after a few minutes you’ll have that model and it’ll work. :)

Similarly, I didn’t have a data/aclImdb/models directory to save the TEXT object into.

nikhil,
Before this, it would have been one-hot encoded, since it is categorical rather than continuous. The number of embedding dimensions will also affect the shape of the weights.
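
To make that concrete, a minimal PyTorch sketch (the sizes and values are just illustrative): the single day-of-week index is looked up in the 7×4 embedding matrix, and the resulting 4 columns are concatenated onto the continuous columns, so each input row simply gets wider.

import torch
import torch.nn as nn

emb = nn.Embedding(7, 4)             # 7 days of the week, 4-dim embedding

day_idx = torch.tensor([6, 2])       # e.g. Sunday, Wednesday (one index per row)
conts = torch.tensor([[0.5, 1.2],
                      [0.1, 3.4]])   # two made-up continuous features per row

# the lookup replaces each single index with a 4-column vector, which is
# concatenated with the continuous columns -> 6 columns per row
x = torch.cat([emb(day_idx), conts], dim=1)
print(x.shape)                       # torch.Size([2, 6])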

Hi @jeremy,
I tried using embeddings on a different dataset using Keras.


The loss graph is very weird.
What am I doing wrong?

Can you share your whole notebook?

Here is the link to my notebook: https://github.com/SmitSheth/Passenger-Survival-Analysis/blob/master/titanic.ipynb