Loading pretrained embeddings with some missing words

How would you handle loading a pretrained word embedding that is missing some important words in your dataset?

Should I load the word embedding, add an additional row for each missing word, initialize those rows with random weights, and then train them?

That’s all I’ve done in the past, although I don’t think it’s ideal. I’ll be doing some research on just this issue after the course, as it happens, since I don’t think anyone has done it well yet.
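For anyone wanting to try the approach described above, here's a minimal sketch in NumPy. The function name and the idea of matching the random rows to the mean and standard deviation of the known vectors are my own assumptions, not something prescribed by the course — the key point is just that missing words get their own (trainable) rows rather than being dropped:

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Copy pretrained vectors where available; randomly init the rest.

    vocab:      list of words in your dataset's vocabulary
    pretrained: dict mapping word -> 1-D np.ndarray of length `dim`
    Missing words get random draws matching the mean/std of the known
    vectors (an assumption on my part), and remain trainable so they
    can be tuned during fine-tuning.
    """
    rng = np.random.default_rng(seed)
    known = np.stack(list(pretrained.values()))
    mean, std = known.mean(), known.std()
    # Start every row random, then overwrite the rows we know.
    matrix = rng.normal(mean, std, size=(len(vocab), dim))
    missing = []
    for i, word in enumerate(vocab):
        vec = pretrained.get(word)
        if vec is not None:
            matrix[i] = vec
        else:
            missing.append(word)
    return matrix.astype(np.float32), missing
```

The resulting matrix can then be used to initialize an embedding layer (e.g. `nn.Embedding.from_pretrained(..., freeze=False)` in PyTorch) so the random rows get updated during training. Drawing the random rows from the same distribution as the pretrained ones just avoids giving the new words vectors of a wildly different scale.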