I’m not sure if I should post this in a different section, but I decided to write it here since it’s fairly close to the embeddings and language-model work we did in Lesson 4.

I’m currently trying to implement an algorithm called Node2Vec to learn embeddings of the nodes in a mobile network: at a telecom company we have full information on which numbers interact with each other, and I have some questions that some of you might have expertise on.

Here’s the paper and the implementation on GitHub:

Question 1: My understanding is that the idea of the paper is to take each node, run a fixed number of random walks (`num_walks`) of fixed length (`walk_length`) to build a text-like corpus of node sequences, and then use SGD to learn the embeddings - is this correct?
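To check my own understanding of Question 1, here’s a minimal sketch of that pipeline. The graph, the function names, and the uniform sampling are all my own simplifications (with p = q = 1 the node2vec walk reduces to a plain uniform random walk):

```python
import random

# Toy undirected call graph as an adjacency dict (stand-in for the real network).
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def random_walk(graph, start, walk_length, rng):
    """One uniform random walk (what node2vec reduces to when p = q = 1)."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

def build_corpus(graph, num_walks, walk_length, seed=0):
    """num_walks walks starting from every node -> a 'corpus' of node
    sequences, which is then fed to skip-gram exactly like sentences."""
    rng = random.Random(seed)
    corpus = []
    for _ in range(num_walks):
        for node in graph:
            corpus.append(random_walk(graph, node, walk_length, rng))
    return corpus

corpus = build_corpus(graph, num_walks=2, walk_length=5)
```

The resulting `corpus` would then go straight into a skip-gram model (e.g. gensim’s `Word2Vec`) with each walk treated as one sentence.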

Question 2: I think the big contribution of the paper is the way they generate these walks (the biased sampling that trades off between BFS-like and DFS-like exploration), and the rest of it is exactly the same as word2vec / any other language model - is this correct?
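For concreteness, my reading of that tradeoff in Question 2 is the second-order transition rule below: the return parameter p and in-out parameter q reweight each neighbor of the current node based on its distance from the previous node. The toy graph and function name are mine; this is a sketch of the rule as I understand it from the paper, not the reference implementation:

```python
# Same toy undirected graph as an adjacency dict.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

def node2vec_weights(graph, prev, curr, p, q):
    """Unnormalized transition weights for the second-order walk:
    going back to `prev` is scaled by 1/p, a neighbor at distance 1
    from `prev` gets weight 1 (BFS-like), and a neighbor at distance 2
    from `prev` is scaled by 1/q (DFS-like)."""
    weights = []
    for nxt in graph[curr]:
        if nxt == prev:
            weights.append(1.0 / p)   # return parameter
        elif nxt in graph[prev]:
            weights.append(1.0)       # stays close to prev
        else:
            weights.append(1.0 / q)   # moves outward
    return weights

# From B, having just come from A: returning to A is damped by p,
# staying near A (via C) is neutral, moving away (to D) is scaled by 1/q.
w = node2vec_weights(graph, "A", "B", p=2.0, q=0.5)  # -> [0.5, 1.0, 2.0]
```

With q < 1 the walk is pushed outward (more DFS-like), with q > 1 it stays local (more BFS-like), which is how I understand the search-strategy interpolation in the paper.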

Question 3: It seems like the largest share of the computation time goes into generating the random walks, while the SGD part doesn’t take that long - does this mean that re-implementing it in PyTorch wouldn’t really speed things up, since the walk generation wouldn’t benefit much from being on the GPU?
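Related to Question 3: if I’ve understood correctly, the usual CPU-side speedup for the walk generation is precomputing alias tables per node, which turns each weighted neighbor draw into an O(1) operation after O(n) setup (I believe the reference implementation does something like this, but I may be wrong). A sketch of Walker’s alias method, names my own:

```python
import random

def build_alias_table(probs):
    """Walker's alias method: O(n) preprocessing of a discrete
    distribution so each later sample costs O(1)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [0.0] * n, [0] * n
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        scaled[l] -= 1.0 - scaled[s]   # move leftover mass to the large bucket
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:            # leftovers are exactly probability 1
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng):
    """O(1) sample: pick a bucket uniformly, then flip a biased coin."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Since this is sequential, branchy, per-node work, it maps poorly to a GPU, which matches your intuition that PyTorch would mostly accelerate the SGD part, not the walks.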

I understand that this isn’t part of the course and isn’t even really “deep learning”, but since there are a lot of smart and knowledgeable people on this forum I decided to give it a try. Thank you for reading!