Lesson 6 In-Class Discussion


(Sanyam Bhutani) #22

I couldn’t find the ‘dislike’ button on the forums for this comment :stuck_out_tongue:


(Dipjyoti Bisharad) #23

Why is the transpose of the embedding matrix used for computing PCA?
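
For context, this refers to the collaborative-filtering notebook, where the learned movie embedding is reduced with PCA roughly like this (a sketch, not the notebook’s exact code; the variable names are assumptions):

import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the learned movie embedding weights,
# shape (n_movies, n_factors); in the notebook this comes from the
# model's item embedding layer.
movie_emb = np.random.randn(3000, 50)

pca = PCA(n_components=3)
# Fitting on the transpose treats each factor as a sample and each
# movie as a feature, so components_ (shape (3, n_movies)) gives
# every movie a coordinate along three principal directions.
movie_pca = pca.fit(movie_emb.T).components_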


(Pete Condon) #24

I assumed that the likes were ironic :wink: (including mine)


(yinterian) #25

Take a careful look at the formula.


(James Dietle) #26

“Tank Girl” is dialogue-driven?

https://www.movieposter.com/posters/archive/main/95/MPW-47507

Could it be how surreal or satirical the movies are? But then where is Memento?


(Kevin Bird) #27

Where is that formula located?

Correct Answer: https://github.com/fastai/fastai/blob/master/fastai/column_data.py (Line 184)


(yinterian) #28

The forward function inside the class.


(ecdrid) #29

From the fellows who have done this course previously (this helps a lot).


(Kevin Bird) #30

I’ll make a forum post. I’m still a bit confused.


(yinterian) #31

No, look at the collaborative filtering model. Here:

import torch.nn as nn
import torch.nn.functional as F

class EmbeddingDotBias(nn.Module):
    def __init__(self, n_factors, n_users, n_items, min_score, max_score):
        super().__init__()
        self.min_score, self.max_score = min_score, max_score
        # get_emb is a fastai helper (defined alongside this class in
        # column_data.py) that builds an initialized nn.Embedding.
        (self.u, self.i, self.ub, self.ib) = [get_emb(*o) for o in [
            (n_users, n_factors), (n_items, n_factors), (n_users, 1), (n_items, 1)
        ]]

    def forward(self, users, items):
        # Elementwise product summed over factors = per-row dot product
        um = self.u(users) * self.i(items)
        res = um.sum(1) + self.ub(users).squeeze() + self.ib(items).squeeze()
        # Sigmoid squashes to (0, 1), then rescale into [min_score, max_score]
        return F.sigmoid(res) * (self.max_score - self.min_score) + self.min_score
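
(So the “formula” is the forward method: the dot product of user and item factors plus the two bias terms, pushed through a sigmoid and rescaled into the [min_score, max_score] rating range.)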

(louis duverger) #32

What would be the difference between a shallow embedding and a deep learning embedding?

  • shallow: go through a dot product / matrix multiplication
  • deep learning: stack several layers on top of a one-hot encoding or any other classic categorical encoding

Is it something like the sketch below?
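
Roughly, yes. Here is a minimal PyTorch sketch of the “deep” variant, mirroring the EmbeddingDotBias class above (names and layer sizes are assumptions, not fastai’s exact implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, n_users, n_items, n_factors=50, n_hidden=10):
        super().__init__()
        self.u = nn.Embedding(n_users, n_factors)
        self.i = nn.Embedding(n_items, n_factors)
        self.lin1 = nn.Linear(n_factors * 2, n_hidden)
        self.lin2 = nn.Linear(n_hidden, 1)

    def forward(self, users, items):
        # Instead of a fixed dot product, concatenate the two embeddings
        # and let fully connected layers learn the interaction.
        x = torch.cat([self.u(users), self.i(items)], dim=1)
        x = F.relu(self.lin1(x))
        return self.lin2(x).squeeze()

The shallow version hard-codes the user-item interaction as a dot product; the deep version learns it.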

(K Sreelakshmi) #33

For large NLP datasets, is shallow learning a faster way to get embeddings than training a neural net first?


#34

What was the last question? Didn’t hear it


(Erin Pangilinan) #35

They were asking about applying some techniques from CV to NLP.


(Ankit Goila) #36

I think it’s related to this:


(Pete Condon) #37

I thought it was a question about using transfer learning for NLP (as is done in vision).


(Louis Guthmann) #38

When using categories in Pandas, how do you keep the same mapping between train and test if they aren’t merged in the first place?
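
One way (a sketch; ‘store’ is a made-up column name): fit the categories on the training frame, then reuse them when converting the test frame. If I remember correctly, fastai’s apply_cats helper does essentially this.

import pandas as pd

train = pd.DataFrame({'store': ['a', 'b', 'a', 'c']})
test  = pd.DataFrame({'store': ['b', 'c', 'b']})

train['store'] = train['store'].astype('category')
# Reuse the training categories so the integer codes line up in test;
# values unseen in train become NaN instead of getting new codes.
test['store'] = pd.Categorical(test['store'],
                               categories=train['store'].cat.categories)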


(Arjun Rajkumar) #39

In Rossmann - what is y_range?
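
(Not an official answer, but y_range appears to play the same role as min_score/max_score in the EmbeddingDotBias code above: the model’s final output is rescaled into the target’s known range, roughly

out = F.sigmoid(x) * (y_range[1] - y_range[0]) + y_range[0]

so predictions can never leave that range.)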


(ecdrid) #40

Check out Imputer from the docs
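
For the missing-value side of this, a minimal sketch of the fit-on-train / apply-to-test pattern with scikit-learn’s Imputer (since deprecated in favour of SimpleImputer):

import numpy as np
from sklearn.preprocessing import Imputer  # sklearn < 0.22

X_train = np.array([[1.0, np.nan], [3.0, 4.0], [5.0, 6.0]])
X_test  = np.array([[np.nan, 2.0]])

imp = Imputer(strategy='median')
X_train = imp.fit_transform(X_train)  # learn column medians from train
X_test  = imp.transform(X_test)       # reuse the same medians on test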


(Erin Pangilinan) #41

Correct. I was saying that’s what he was replying with.