Embedding matrix of a pretrained language model

When we fine-tune the AWD_LSTM pretrained on WikiText-103, do we replace its embedding matrix with a new one built from our dataset's vocab?

Here's how I understand it:

When you fine-tune ULMFiT, fastai builds a vocab from your fine-tuning corpus and then remaps the pretrained embedding matrix onto it: rows for tokens that also appear in the pre-training vocab are copied straight from the pretrained model, while rows for tokens the pretrained model has never seen are initialized with the mean of the pretrained embeddings rather than randomly. So the matrix is neither kept as-is nor replaced wholesale; it is resized to the new vocab and selectively reused.
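Here's a minimal sketch of that remapping, assuming the pretrained embeddings are a plain PyTorch tensor and the vocabs are lists of strings. `remap_embeddings` is a hypothetical helper for illustration, not the fastai API itself (if I remember right, fastai does this internally in `match_embeds`, or `convert_weights` in fastai v1):

```python
import torch

def remap_embeddings(old_emb, old_vocab, new_vocab):
    """Build an embedding matrix for new_vocab, reusing pretrained rows.

    old_emb: (len(old_vocab), emb_dim) pretrained embedding weights.
    Hypothetical helper, not the fastai API itself.
    """
    old_stoi = {w: i for i, w in enumerate(old_vocab)}
    mean_row = old_emb.mean(0)                         # fallback for unseen tokens
    new_emb = old_emb.new_empty(len(new_vocab), old_emb.size(1))
    for i, w in enumerate(new_vocab):
        j = old_stoi.get(w)
        new_emb[i] = old_emb[j] if j is not None else mean_row
    return new_emb

# Toy usage (400 is the default AWD_LSTM embedding size):
old_vocab = ['the', 'movie', 'was']
new_vocab = ['the', 'film', 'was', 'xxunk']
old_emb = torch.randn(len(old_vocab), 400)
new_emb = remap_embeddings(old_emb, old_vocab, new_vocab)
assert torch.equal(new_emb[0], old_emb[0])             # 'the' keeps its pretrained vector
```

And since ULMFiT ties the encoder embeddings to the decoder (output softmax) weights, the same remapping effectively applies to the output layer as well.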