Hey, I'm new to RNNs and I was working through the 'Predicting English word version of numbers using an RNN' tutorial in the fast.ai NLP MOOC. While trying to understand the RNN implementation, I came across this bit of code:

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model2(nn.Module):
    def __init__(self):
        super().__init__()
        # nv (vocab size) and nh (hidden size) are globals defined
        # earlier in the notebook.
        self.i_h = nn.Embedding(nv, nh)  # input -> hidden (embedding lookup)
        self.h_h = nn.Linear(nh, nh)     # hidden -> hidden
        self.h_o = nn.Linear(nh, nv)     # hidden -> output (vocab logits)
        self.bn = nn.BatchNorm1d(nh)

    def forward(self, x):
        # Fresh zero hidden state for each batch.
        h = torch.zeros(x.shape[0], nh).to(device=x.device)
        res = []
        for i in range(x.shape[1]):  # step through the sequence dimension
            h = h + self.i_h(x[:, i])
            h = F.relu(self.h_h(h))
            res.append(self.h_o(self.bn(h)))
        return torch.stack(res, dim=1)
```

So here, the line `self.i_h = nn.Embedding(nv, nh)`, given the vocab size (`nv`) and the embedding dimension (`nh`), creates the embedding matrix. From this Stack Overflow answer on how [Embedding](https://stackoverflow.com/questions/50747947/embedding-in-pytorch) works, it seems the layer just starts out with random values, and we have to explicitly load pretrained vectors (Word2Vec, etc.) if we want them.
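
For instance, this is the minimal check I ran of that behaviour (the vocab size and dimension here are made up, just for illustration):

```
import torch
import torch.nn as nn

torch.manual_seed(0)

# A made-up vocab of 5 tokens, each mapped to a 3-d vector.
emb = nn.Embedding(num_embeddings=5, embedding_dim=3)

# The weight matrix is randomly initialized (N(0, 1) by default) --
# no Word2Vec/GloVe involved unless you load it yourself.
print(emb.weight)

# A forward pass is just a row lookup into that matrix.
ids = torch.tensor([0, 2, 2])
print(emb(ids).shape)  # torch.Size([3, 3])
```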

So I was wondering: why didn't we train the embeddings beforehand on the numbers dataset, or plug in a pretrained word embedding model (GloVe, Word2Vec) before looking up the embedding vectors?
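
For context, what I had in mind is something like this (just a sketch; `pretrained` is a stand-in tensor, not part of the tutorial, and the real matrix would have to be shape `(nv, nh)`):

```
import torch
import torch.nn as nn

# Stand-in for real pretrained vectors (e.g. GloVe loaded from disk).
pretrained = torch.randn(5, 3)

# freeze=True keeps the vectors fixed during training;
# freeze=False would let them fine-tune with the rest of the model.
emb = nn.Embedding.from_pretrained(pretrained, freeze=True)

print(emb(torch.tensor([1, 4])))  # rows 1 and 4 of `pretrained`
```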