Good question! I discovered that the test set of the TREC-6 dataset is so small that nearly all reported differences in the literature are statistically meaningless. I think it’s odd that people didn’t report on this - although in the end our paper didn’t mention it either due to space constraints!
However, nearly all modern datasets are big enough that the confidence intervals are tiny enough not to be an issue.
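For a rough sense of scale, here's a back-of-the-envelope sketch (not from the paper) using a normal-approximation binomial interval; the ~500-example TREC-6 test size and the accuracy figures are just illustrative assumptions:

```python
import math

def accuracy_ci(acc, n, z=1.96):
    """95% normal-approximation confidence interval for a reported accuracy."""
    se = math.sqrt(acc * (1 - acc) / n)
    return acc - z * se, acc + z * se

# TREC-6's test set has roughly 500 examples, so at ~95% accuracy:
print(accuracy_ci(0.95, 500))    # ~ (0.931, 0.969): about a +/-2 point interval
# By contrast, on a 25,000-example test set like IMDb:
print(accuracy_ci(0.95, 25000))  # ~ (0.947, 0.953): about +/-0.3 points
```

So on a test set that small, differences of a point or two between papers sit well inside the noise.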
Yeah, it’s basically learning all the details of how the stuff we briefly saw in lesson 4 actually works, along with learning how to do that on larger datasets, faster, using the new fastai.text library (which didn’t exist in part 1). Plus transfer learning on wt103, of course.
@chunduri n-gram models look at the previous n-1 words to predict the next one. So a bigram model looks at just the previous word, a trigram looks at the previous 2 words, and so on. The number of choices available for a word drops significantly as we condition on more previous words, e.g. 'a ___' vs. 'drink a ___'. The second blank can be filled by fewer things than the first, so if we look at perplexity in terms of branching factor as explained in the video, the number of branches it results in reduces, hence the perplexity is lower.
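A toy sketch to make the branching-factor intuition concrete (my own example, not from the lecture): the model that conditions on the previous word assigns higher probabilities on average, so its perplexity, exp of the average negative log-probability, comes out lower:

```python
import math
from collections import Counter, defaultdict

# Tiny illustrative corpus
tokens = "i drink a coffee . i drink a tea . i eat a sandwich .".split()

# Unigram model: p(w) ignores context entirely.
uni = Counter(tokens)
def p_unigram(w): return uni[w] / len(tokens)

# Bigram model: p(w | prev) conditions on one previous word.
bi = defaultdict(Counter)
for prev, w in zip(tokens, tokens[1:]): bi[prev][w] += 1
def p_bigram(prev, w): return bi[prev][w] / sum(bi[prev].values())

def perplexity(logprobs): return math.exp(-sum(logprobs) / len(logprobs))

uni_lp = [math.log(p_unigram(w)) for w in tokens[1:]]
bi_lp  = [math.log(p_bigram(prev, w)) for prev, w in zip(tokens, tokens[1:])]
print(perplexity(uni_lp), perplexity(bi_lp))  # the bigram perplexity is much lower
```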
I use it all the time… I leave the notebooks running on a remote server as a nohup process for days… I just refresh my browser page and pick up from where I left off… all the variables are still there…
df_trn = pd.DataFrame({'text':trn_texts, 'labels':[0]*len(trn_texts)}, columns=col_names)
The above line in the imdb notebook seems to make all the labels equal to 0 in the data frame. Is that a bug or am I missing something here?
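For reference, a toy sketch of what a labelled classifier DataFrame would normally look like when the texts come from separate negative/positive lists (the lists and names here are just stand-ins, not the notebook's):

```python
import pandas as pd

# Toy stand-ins for the review lists (illustrative only)
neg_texts = ['terrible film', 'boring plot']
pos_texts = ['loved it', 'great acting']

col_names = ['labels', 'text']
trn_texts  = neg_texts + pos_texts
trn_labels = [0] * len(neg_texts) + [1] * len(pos_texts)  # real class per row
df_clas = pd.DataFrame({'text': trn_texts, 'labels': trn_labels}, columns=col_names)
print(df_clas)
```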
I wonder if it would help the language model at all to include some attempt at representing the etymology of words (e.g. Latin, Greek, etc.). Or is that just completely crazy?
In the imdb notebook, inside get_texts(df, n_lbls=1), the following line:
for i in range(n_lbls+1, len(df.columns)): texts += f' {FLD} {i-n_lbls} ' + df[i].astype(str)
I feel it should be changed to:
for i in range(n_lbls+1, len(df.columns)): texts += f' {FLD} {i-n_lbls+1} ' + df[i].astype(str)
Otherwise we will end up with 2 fields that have xfld=1
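For context, here is roughly what the surrounding function looks like with the proposed change folded in (a sketch from memory, so treat the exact body as an assumption; the real notebook version also runs fixup and tokenization, which I've omitted):

```python
import numpy as np

BOS, FLD = 'xbos', 'xfld'  # beginning-of-text and field markers used in the notebook

def get_texts(df, n_lbls=1):
    # The first n_lbls columns hold the labels; the remaining columns hold text fields.
    labels = df.iloc[:, range(n_lbls)].values.astype(np.int64)
    # The first text column is already tagged as field 1 here...
    texts = f'\n{BOS} {FLD} 1 ' + df[n_lbls].astype(str)
    # ...so when the loop starts at i = n_lbls+1, i - n_lbls == 1 again and the next
    # column would also be tagged xfld 1; using {i-n_lbls+1} tags it as xfld 2 instead.
    for i in range(n_lbls + 1, len(df.columns)):
        texts += f' {FLD} {i-n_lbls+1} ' + df[i].astype(str)
    return texts, list(labels)
```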
you mean root words, which could be common to different language groups. sounds like a great idea.
jeremy was talking about sub-words in class, which divides each words into its roots I think is close to this idea.
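Not quite etymology, but here's a toy illustration (my own, not from the class) of the subword idea, along the lines of fastText-style character n-grams, where words that share a root end up sharing pieces:

```python
def char_ngrams(word, n=4):
    """Split a word into fastText-style character n-grams, with < > as word boundaries."""
    w = f'<{word}>'
    return [w[i:i + n] for i in range(len(w) - n + 1)]

# Words with a shared Latin root end up sharing subword pieces, so a model can
# relate them even without an explicit etymology feature.
print(char_ngrams('scribe'))       # ['<scr', 'scri', 'crib', 'ribe', 'ibe>']
print(char_ngrams('inscription'))  # shares 'scri' with 'scribe'
```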
I’m struggling to keep Focal Loss from running out of memory (I’m trying to rewrite it, since there are so many target classes here). I’m running the hinge loss version now (that was easier, since there’s already a version in PyTorch).
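A minimal sketch of one way to write a multi-class focal loss without materializing a one-hot target matrix, which is a common source of memory blow-up when there are many classes (this is just the idea, not the actual rewrite; gamma=2.0 is the usual default from the paper):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss: (1 - p_t)^gamma * NLL, averaged over the batch.

    logits: (N, C) raw scores, targets: (N,) class indices.
    Only the log-probability of the true class is gathered, so no (N, C)
    one-hot matrix is ever built.
    """
    logp = F.log_softmax(logits, dim=-1)                      # (N, C)
    logp_t = logp.gather(1, targets.unsqueeze(1)).squeeze(1)  # (N,) log p of true class
    p_t = logp_t.exp()
    return ((1 - p_t) ** gamma * -logp_t).mean()

# quick smoke test with a large class count
loss = focal_loss(torch.randn(8, 30000), torch.randint(0, 30000, (8,)))
```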
Actually I needed this lesson, the emphasis on the conceptual difference: language models vs. custom embeddings.
Somehow I didn’t get such a clear picture after part 1. My mental summary after lesson 4 of part 1 was “ok, custom embeddings”. So wrong! (My bad, I’ve rewatched the lesson and it was all already there, crystal clear.)
But now, finally, after this lesson I think I got the “crux” of the language model approach to transfer learning. I usually figure that if I can’t summarize an idea in a few simple sentences, I probably don’t really have the idea, so I’ll tentatively try to summarize:
- It is, but not so much, about custom embeddings being “initialized” by learning the structure of English.
- It is, but not so much, about letting custom embeddings learn the classification task.
- It is, much more, about both tasks sharing the architecture.
I will probably reconsider this summary after a couple more rewatches of the lesson, but as I said, it was really useful every time Rachel and Jeremy emphasized “we are not using embeddings, but a language model”. After hearing it four or five times, the “heads up” worked.
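One way I picture the difference in rough PyTorch (the module sizes and file name are made up; it just contrasts copying a single embedding layer with reusing a whole pretrained encoder that the classifier head then sits on):

```python
import torch
import torch.nn as nn

# Shared encoder: embeddings + RNN. The classifier differs from the LM only in its head.
class Encoder(nn.Module):
    def __init__(self, vocab_sz, emb_sz=400, hid_sz=1150):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, emb_sz)
        self.rnn = nn.LSTM(emb_sz, hid_sz, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(self.emb(x))
        return out

encoder = Encoder(vocab_sz=60000)

# "Custom embeddings" transfer: copy only the embedding weights (e.g. word2vec/GloVe);
# the rest of the network still starts from random.
# encoder.emb.weight.data.copy_(pretrained_embedding_matrix)  # hypothetical tensor

# Language-model transfer: load the *whole* encoder that was trained to predict the
# next word, then put a small classification head on top of it.
# encoder.load_state_dict(torch.load('lm_encoder.pth'))       # hypothetical file
classifier_head = nn.Linear(1150, 2)  # e.g. pos/neg for IMDb
```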