Word embeddings for related/associated words

Hi everybody,

The way I understand “standard” word embeddings and cosine similarity, they find the vectors most similar to the input vector.

For example:
Microsoft would yield Apple, Samsung, IBM.
These are all gigantic tech companies, and their names presumably appear in similar contexts in a corpus.
Please correct me if I’m wrong here.
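
Just to make it concrete, this is roughly the kind of lookup I mean. A minimal sketch with gensim; the pretrained word2vec-google-news-300 vectors and the exact neighbours are just my assumption:

```python
# Minimal sketch of the "standard" nearest-neighbour lookup via cosine similarity.
# Assumes gensim is installed; uses the pretrained word2vec-google-news-300
# vectors from gensim-data (the actual neighbours returned may differ).
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # KeyedVectors; downloads the model on first use

# Top neighbours of the input vector, ranked by cosine similarity
print(wv.most_similar("Microsoft", topn=5))
# I'd expect this to be dominated by other big tech companies (Apple, IBM, ...)
```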

But what if I’m looking for the most related/associated words?
Terms that commonly appear in the context of, for example, Microsoft.
For Microsoft, results would include Windows, Bing, Surface, Bill Gates.
Elon Musk would return Tesla, SpaceX, Starlink.
My guess is that these vectors would end up somewhere in the “vicinity” of Microsoft in vector space.
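
That guess should be easy to check by comparing pairwise cosine similarities directly, something like this (same assumed pretrained vectors as in the sketch above; the actual numbers might tell a different story):

```python
# Compare how close the "related" words sit to Microsoft vs. the "similar" companies.
# Reuses the wv KeyedVectors loaded in the sketch above (assumed word2vec-google-news-300).
for other in ["Apple", "Samsung", "IBM", "Windows", "Bing", "Surface", "Bill_Gates"]:
    if other in wv.key_to_index:  # skip tokens missing from the vocabulary
        print(other, wv.similarity("Microsoft", other))
```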

Is there a way to train embeddings for related vectors?
Or maybe use a different similarity calculation?

I’d be very thankful for any suggestions and pointers in the right direction.