Using the DeViSE technique to make the commands accepted by virtual assistants more varied!

Hello people,
I was just working through the previous week's notebooks (translate.ipynb and devise.ipynb).
Most virtual assistants today expect fixed commands for their tasks.
It occurred to me that the DeViSE concept Jeremy introduced in the last class can be combined with seq2seq translation to make the commands we give virtual assistants more varied and “human-like”.

The idea entails the following steps:

  • Use the encoder's final hidden state from the language translation model as a fixed-size sentence embedding.
  • Use those embeddings to match the input (with the same nearest-neighbour method as in the devise notebook) to the semantically closest command in the virtual assistant's dictionary; see the first sketch after this list.
  • To ensure that the hidden state captures the semantic meaning of the sentence, one could train seq2multiseq models: a single hidden state from the encoder is translated into multiple languages, each by its own decoder (see the second sketch below). If no such multi-way parallel dataset is available, one could instead learn a mapping between the hidden-state vector spaces of encoders from different translation models, similar to how Jeremy matched the image-feature and word-vector spaces.
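Here is a minimal sketch of the first two steps. I am assuming an encoder like the one in translate.ipynb (an embedding layer plus a GRU that returns `(outputs, hidden)`); `sentence_embedding`, `closest_command`, `stoi` and the `commands` list are all hypothetical names for illustration, not anything from the actual notebooks:

```python
import torch
import torch.nn.functional as F

def sentence_embedding(encoder, tokens, stoi):
    """Run one tokenized sentence through the encoder and return the
    final hidden state as a fixed-size sentence embedding."""
    ids = torch.tensor([stoi[t] for t in tokens]).unsqueeze(1)  # (seq_len, 1)
    with torch.no_grad():
        _, hidden = encoder(ids)      # hidden: (n_layers, 1, n_hidden), assumed interface
    return hidden[-1, 0]              # top layer's state -> (n_hidden,)

def closest_command(query_emb, command_embs, commands):
    """DeViSE-style lookup: cosine nearest neighbour among the
    pre-computed embeddings of the assistant's known commands."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), command_embs, dim=1)
    return commands[sims.argmax().item()]

# Embed every canonical command once, then match free-form input against them:
# commands = ["turn on the lights", "set an alarm for 7am", ...]
# command_embs = torch.stack([sentence_embedding(enc, c.split(), stoi) for c in commands])
# best = closest_command(sentence_embedding(enc, "switch the lamp on".split(), stoi),
#                        command_embs, commands)
```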
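And for the seq2multiseq idea, a rough sketch of the training setup: one shared encoder, one decoder per target language, with the per-language losses summed so gradients from every decoder shape the same hidden state. The encoder and decoder interfaces here are assumptions modelled on translate.ipynb, not actual fastai classes:

```python
import torch.nn as nn

class Seq2MultiSeq(nn.Module):
    """One shared encoder feeding a separate decoder per target language."""
    def __init__(self, encoder, decoders):          # decoders: {lang: decoder}
        super().__init__()
        self.encoder = encoder
        self.decoders = nn.ModuleDict(decoders)

    def forward(self, src, targets):
        # targets: {lang: gold output ids}, used for teacher forcing
        _, hidden = self.encoder(src)               # shared sentence representation
        return {lang: dec(targets[lang], hidden)    # every decoder starts from
                for lang, dec in self.decoders.items()}  # the same hidden state

def multiseq_loss(logits_by_lang, targets):
    # Summing the per-language losses forces the single encoder state
    # to carry enough meaning for all target languages at once.
    criterion = nn.CrossEntropyLoss()
    return sum(criterion(logits.view(-1, logits.size(-1)), targets[lang].view(-1))
               for lang, logits in logits_by_lang.items())
```

If no multi-way parallel corpus is available, the fallback in the last bullet would replace this with a learned map between two separately trained encoders' hidden spaces, in the spirit of the DeViSE image-to-word-vector mapping.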

Any suggestions or considerations on the above idea would be welcome.
I am working on a notebook to test out the idea and will share it once things get presentable.

@narvind2003 It seems that in your thread “What does encoder actually learn?”, you were analyzing exactly what I am trying to achieve here. I am working on a notebook around the above ideas; in the meantime, any feedback would be welcome.

Sure. This was one of my motivations as well. We have built such chatbots with word embeddings, but now, with our encoders, there is an opportunity to make them work much better.