Lesson 7 in-class chat ✅

I think the main problem I’m having here is that predicting the next word against this dataset of English words for numbers is a bit dull. I’d find this section a lot more compelling if it were working against a more interesting word corpus (like multi-word movie names or something).

Thank you! Translation sounds like a great idea.

Why aren’t GANs used for NLP, though?

For paragraphs of text, there isn’t anything distinctive about being the 35th word (as opposed to the 34th word) that a network needs to learn in general.

I’m not sure what you mean by “set of word embedding”? Typically you would use the same embeddings for the whole task.

That’s just for the lesson. We have been doing the exact same thing on IMDB reviews.

Yes, this is a simple task, in the interest of time (since we only had a fraction of the class to cover it). Super-resolution was the star task for today.

2 Likes

Yeah, I was having the same issue. It was due to my environment having been created months ago, before PyTorch 1.0 was released a few weeks ago. During this lecture I updated my graphics drivers, installed the new PyTorch 1.0 with CUDA 10, made a new conda env, cloned a fresh repo, etc., and everything is working fine now with 1.0.37.

2 Likes

What’s the best forum section for interacting with peers from now on? This one, the general Part 1 section, or another one?

Can we collect all the interesting blog posts (understandable, well written) by fastai students into a list, please?

Can we have Jeremy talk about anything every week? I’ve kinda got used to this.

18 Likes

What a great way to spend our time. Thanks for the teaching! Love what we are going to be able to do with fast.ai!

3 Likes

Thank you fastai team! It really inspires me to do better time management, learn DL well, and be able to teach others as well. Let me see how far I get…

3 Likes

Here’s something to keep you happy: https://www.youtube.com/watch?v=v16uzPYho4g. That was just yesterday. I got most of the way through and I’m really liking it; I haven’t heard him talk about a lot of the stuff he’s covering here.

11 Likes

Many, many, many thanks to Jeremy, Rachel, Sylvain and all the others for this course. I and so many others have learned so much; it’s incredible. This course is fantastic and very thoughtfully made. Thank you for all the effort and time you all put into it; I’m very grateful.

15 Likes

Thank you Jeremy, Rachel and Fastai team for this wonderful course.

5 Likes

Thanks Jeremy, Rachel and the team! Have you considered moving to Melbourne? :grin:

1 Like

If life is a Unet, they have no choice but to go back to Melbourne! But a better one this time!

2 Likes

Well done, Jeremy, Rachel, Sylvain and everyone in the forums. Very much indebted to all of you and deeply inspired to keep learning and building.

7 Likes

Thank you! It has been an amazing time taking these live lessons. The fastai team, the course, the forums, the work shared… so inspiring.

2 Likes

Fingers crossed! Or maybe crossed Unet? hmm…

1 Like

In the process of explaining my question more clearly, I think I was able to understand more and answer it myself. Please confirm if my understanding is correct:

Let’s say we choose n dimensions to represent each word in the corpus, and assume one of those dimensions captures sentiment. For words expressing sentiment, that dimension will be more strongly activated (e.g. ‘celebrating’ is positive, so the sentiment dimension is more activated). Similarly, assuming there is a dimension for ‘fun’, it will also be strongly activated for the word ‘celebrating’.
Next comes the weight matrix of the hidden layer. This weight matrix is tuned to combine the activations of the ‘celebrating’ embedding (one of them representing the ‘sentiment’ dimension, along with others like ‘fun’) into a more complex feature (perhaps something like ‘partying’). So having the same weight matrix for all words means each RNN node can extract one such feature from every word.

And we can have many RNN nodes in the hidden layer, where each node learns one meaningful feature from the activations of the word embeddings. So sharing the same weight matrix lets each node detect whether its specific feature is present in each word.
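
Here is a minimal sketch of that idea, assuming a toy vocabulary and made-up sizes (this is not the course notebook code; all the names and numbers below are just for illustration): one shared weight matrix is applied to every word’s embedding as a simple RNN loop steps through a sentence.

```python
# A minimal sketch, assuming a toy vocabulary and hypothetical sizes,
# of one shared weight matrix applied to every word's embedding in a simple RNN.
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden_dim = 100, 8, 16          # made-up sizes

embedding = nn.Embedding(vocab_size, emb_dim)         # n-dimensional vector per word
input_to_hidden = nn.Linear(emb_dim, hidden_dim)      # same weights for EVERY word
hidden_to_hidden = nn.Linear(hidden_dim, hidden_dim)  # same weights at EVERY step

tokens = torch.tensor([3, 17, 42])                    # a toy three-word "sentence"
hidden = torch.zeros(hidden_dim)

for tok in tokens:
    word_vec = embedding(tok)                         # activations like 'sentiment', 'fun', ...
    # The shared input_to_hidden weights combine those activations the same way
    # for each word, so each hidden unit can learn to detect one feature
    # in whatever word arrives at that step.
    hidden = torch.tanh(input_to_hidden(word_vec) + hidden_to_hidden(hidden))

print(hidden.shape)  # torch.Size([16])
```

In this illustration, each of the 16 hidden units plays the role of one “RNN node” above: its row of the shared weight matrix decides how strongly it responds to dimensions like ‘sentiment’ or ‘fun’ in whichever word it sees.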

1 Like