Lesson 8 - Official topic

Ah, you may be correct, sorry for misusing the terminology! What I meant was to use two networks (one for tabular, one for text) and then concatenate their outputs in a shared head to produce a single output.
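For anyone curious, a minimal sketch of that idea in plain PyTorch (the module names and feature sizes are illustrative assumptions, not fastai's actual API):

import torch
import torch.nn as nn

class TabularTextModel(nn.Module):
    "Sketch: two encoders whose features are concatenated in a shared head."
    def __init__(self, tab_encoder, text_encoder, n_tab, n_text, n_out):
        super().__init__()
        self.tab_encoder = tab_encoder    # e.g. an MLP over the tabular features
        self.text_encoder = text_encoder  # e.g. a pooled RNN/transformer encoder
        self.head = nn.Sequential(
            nn.Linear(n_tab + n_text, 64), nn.ReLU(), nn.Linear(64, n_out))

    def forward(self, x_tab, x_text):
        # concatenate the two feature vectors, then produce a single output
        feats = torch.cat([self.tab_encoder(x_tab), self.text_encoder(x_text)], dim=1)
        return self.head(feats)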

Zach wrote a nice notebook demonstrating classification using multiple images. Back in Fastai v1, I made a model for multivariate regression using image and tabular data, and other Fastai users came up with great implementations for classification using image, text, and tabular data and classification using text and tabular data.


Really enjoyed this lesson recreating AWD_LSTM. Thank you for all your efforts.

Thank you so much for the awesome series of lessons! Going through the concepts and code examples of fastbook and attempting to answer the questions at the end based on my understanding has been a really constructive exercise - a good chance to find out what I thought I understood but didn't.

Fast.ai Part 1 Round 2 support group anyone?

I plan on going through the chapters again, starting with Ch1 after a week off. Timing-wise, I'm currently thinking Mondays or Tuesdays 6-9pm PST, or Sunday afternoons. If this is of interest to you, heart this post and I'll set up a Google Form to manually organize people into post-class support groups. The format will be silently re-reading the chapters or implementing the notebooks, followed by 30 minutes of discussion.


I’d love a Tuesdays 6-9PM PST schedule myself, if at all possible.
What do you all think?

This is awesome! I've continued the discussion here for anyone who might be interested in joining the reading groups.

Thank you Jeremy, Sylvain and Rachel for your efforts to create and deliver this fantastic 4th incarnation of part 1 of the course.

I can’t echo the feelings of those who are sad that it’s over – because it’s not over unless you want it to be.

We have our work cut out for us – to review and get at the marrow of each of the 8 lessons. And I’m looking forward to it!


Thank you Jeremy, Sylvain and Rachel for the fantastic course! Looking forward to part 2!

Chapter 10 and the ULMFiT paper indicate that training a bidirectional model reduces the error rate on IMDB by almost 1%.

  1. Does this mean that the base LM trained on WikiText is trained backward, and that we then further fine-tune this LM on the IMDB dataset in the same backward direction?

  2. By backward, does it mean that every sequence of words in text and text_ is just flipped around, and that the LM's task is then to predict the preceding word in the sentence rather than the next?
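(To sketch my understanding of (2) with made-up tokens:)

# illustration only: a "backwards" LM sees every sequence reversed, so
# "predict the next token" amounts to predicting the preceding word
tokens = ["xxbos", "this", "movie", "was", "great"]
x = list(reversed(tokens))  # ['great', 'was', 'movie', 'this', 'xxbos']
y = x[1:]                   # targets are still "the next token", but of the reversed input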

The reason normalization techniques like stemming or lemmatization are not recommended when training neural networks is that they essentially throw away useful pieces of information about the vocabulary and about the language.

I have seen people still use these techniques in the Information Retrieval domain to improve recall, so it depends on the context - knowing when to use them and when not to.
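For example, a quick sketch with NLTK's Porter stemmer (illustrative only - the exact outputs depend on which stemmer you pick):

from nltk.stem import PorterStemmer

# stemming collapses related surface forms onto a single token: great for
# recall in IR, but it discards distinctions a neural model could learn from
stemmer = PorterStemmer()
for w in ["universe", "university", "universal"]:
    print(w, "->", stemmer.stem(w))  # all three likely collapse to "univers"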


More or less, actually! You can see my example notebook where I experimented with this (and with SentencePiece too) back in v1; it's still the same thing conceptually :slight_smile: (it only shows SentencePiece in show_batch, but you can see the backwards sentences)

Rachel also discusses this in her NLP course.


Hi everyone! I have been trying to get started with NLP, but I struggle to get a very simple example to work and I no longer know what to try.

I have a dataframe with several columns, most of which I do not need. The useful ones are my x (‘Answered Questions’) and my y (‘Classification’). I managed to successfully build a language model with it; I am only missing the classifier.

I am struggling a lot to pass the y as a label… what am I missing here?

def get_y(r): return r['Classification']

dls_clas = DataBlock(
    blocks=(TextBlock.from_df('Answered Questions', vocab=dls_lm.vocab, seq_len=dls_lm.seq_len), CategoryBlock),
    get_x=ColReader('text'),
    get_y=get_y,
    splitter=RandomSplitter(0.1)).dataloaders(data, bs=128)

I have the feeling I am almost there, but I get `TypeError: 'str' object cannot be interpreted as an integer` -> `KeyError: 'Classification'`:

Setting up Pipeline: get_y -> Categorize
(error happens here!)

This was my best attempt at adapting the fastai text tutorial. :pensive:

Thanks a lot @muellerzr! I’ll try re-implementing that!

Have you by any chance worked on visualising the trained embeddings using PCA? I have been trying to do this, but without much luck.

Also, have you looked into the slanted triangular learning rates introduced in ULMFiT, or do you have any resources for that? I'm trying to work on the IMDB_SAMPLE dataset to try out the various quick experiments mentioned in the paper, while not overfitting the model, as it's a tiny dataset!
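For the PCA part, this is roughly what I have been attempting (a sketch - the attribute path to the embedding layer is my assumption about the AWD_LSTM language model, so it may need adjusting):

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# assumes `learn` is a trained fastai language model whose first module
# holds the token embeddings (the path below is an assumption)
emb = learn.model[0].encoder.weight.detach().cpu().numpy()  # (vocab_size, emb_dim)
pts = PCA(n_components=2).fit_transform(emb)

vocab = learn.dls.vocab
plt.figure(figsize=(8, 8))
for i in range(200):  # plot a subset so the labels stay readable
    plt.scatter(*pts[i], s=4)
    plt.annotate(vocab[i], pts[i], fontsize=7)
plt.show()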

Is ‘data’ in this case, which is passed to your dataloaders, your pandas dataframe?

I haven’t. I can provide a resource someone put together for the tabular models if you think that would help :slight_smile:

Nope! I just followed the pattern instead.

Hard not to, but good idea :wink: Perhaps play with a ton of dropout to see if it helps.


Yeah, would love to try that out myself!

Haha, yeah noticed that! :sweat_smile:

There’s a wonderful notebook by @Pak here:
https://github.com/Pak911/fastai-shared-notebooks/blob/master/interpret_tabular.ipynb

Jeremy also covered this a bit in the tabular lecture.


Thanks!! I finally found the mistake (which I would suggest clarifying in the documentation). So basically, the dataframe that is passed (data in my case) must have only two columns. The x can be called whatever you like, but the y must be called 'label'!

db_clas = DataBlock(
    blocks=(TextBlock.from_df('Answered Questions', vocab=dls_lm.vocab, seq_len=dls_lm.seq_len), CategoryBlock),
    get_x=ColReader('text'),
    get_y=ColReader("label"),
    splitter=RandomSplitter(0.1))

This last bit was not clear to me from reading the documentation.


How can I get the indices of my data that end up in the train and valid sets from RandomSplitter? I tried dls_clas.val2idx, but I found no way to extract the indices.
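One way (a sketch, assuming 'data' is the dataframe from above): fastai splitters are plain functions from items to a (train_idxs, valid_idxs) pair, so you can call the splitter yourself. Since RandomSplitter is random, fix its seed and reuse the same splitter object in the DataBlock so the partitions match:

splitter = RandomSplitter(valid_pct=0.1, seed=42)   # seeded so it is reproducible
train_idxs, valid_idxs = splitter(range_of(data))   # row indices for each set
# pass this same `splitter` to the DataBlock so the split matches these indices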

Hi all,

In the ch10 text classifier fine-tuning, the discriminative learning-rate slice has a specific 2.6**4 constant. Is there a blog post or some experiments on where this came from? I searched the forums but didn't find an answer. Here's the code snippet.

learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2/(2.6**4),1e-2))

Thanks,
Daniel
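(For context on what the slice itself does, independent of where 2.6 comes from: as I understand it, fastai spreads learning rates geometrically across the layer groups between the slice's start and stop. A quick sketch of that spacing, assuming 5 layer groups:)

# hypothetical illustration of slice(1e-2/(2.6**4), 1e-2) over 5 layer groups:
# the learning rates are spaced geometrically, so each group's LR is 2.6x the last
lo, hi, n = 1e-2 / (2.6**4), 1e-2, 5
lrs = [lo * (hi / lo) ** (i / (n - 1)) for i in range(n)]
print(lrs)  # approximately [0.00022, 0.00057, 0.0015, 0.0038, 0.01]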