I was just working through the previous weeks' notebooks (translate.ipynb and devise.ipynb).
Most virtual assistants today expect fixed commands for each task.
It occurs to me that the DeVISE concept Jeremy introduced in the last class can be combined with seq2seq translation to make the commands we give virtual assistants more varied and “human-like”.
The idea entails the following steps:
- Use the encoder's final hidden state from the language-translation model as an embedding for the input sentence.
- Use that embedding to match the input (via the same nearest-neighbour method as in the devise notebook) to the semantically closest command in the virtual assistant's dictionary.
- To ensure that the hidden state captures the semantic meaning of the sentence, one could train a seq2multiseq model: a single encoder hidden state is decoded into multiple languages, each with its own decoder. If no such dataset is available, one could instead align the vector spaces of the hidden states from encoders of different language translators, similar to how Jeremy matched the image-class and word-vector spaces.
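To make the first two steps concrete, here is a minimal sketch of the nearest-neighbour lookup. Everything here is an illustrative assumption: `embed_sentence` is a toy bag-of-words stand-in for the encoder's final hidden state, and `VOCAB`/`COMMANDS` are made-up examples, not anything from the notebooks.

```python
import numpy as np

# Toy vocabulary and bag-of-words embedding. In the real idea, embed_sentence
# would return the encoder's final hidden state from the translation model.
VOCAB = ["turn", "on", "off", "the", "lights", "play", "some", "music", "switch"]

def embed_sentence(sentence):
    vec = np.zeros(len(VOCAB))
    for w in sentence.lower().split():
        if w in VOCAB:
            vec[VOCAB.index(w)] += 1.0
    return vec

# The assistant's fixed command dictionary, pre-embedded once.
COMMANDS = ["turn on the lights", "turn off the lights", "play some music"]
COMMAND_VECS = np.stack([embed_sentence(c) for c in COMMANDS])

def closest_command(utterance):
    # Cosine-similarity nearest neighbour, as in the devise notebook's lookup.
    v = embed_sentence(utterance)
    sims = COMMAND_VECS @ v / (
        np.linalg.norm(COMMAND_VECS, axis=1) * np.linalg.norm(v) + 1e-8)
    return COMMANDS[int(np.argmax(sims))]

print(closest_command("switch on the lights"))  # -> "turn on the lights"
```

Swapping in real encoder hidden states should only require replacing `embed_sentence`; the lookup itself stays the same.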
Any suggestions/considerations on the above idea?
I am working on a notebook to test the idea, and I shall share it once it is presentable.