@sebastianruder @jeremy
Sorry if this is not a relevant question…
I am trying to extract sentence encodings from the pre-trained ULMFiT model.
Following the tutorial, I modified the code like this:
learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
#Keep only the encoder (learn.model[0] is the AWD_LSTM, not just the embedding layer)
learn.model = learn.model[0]
learn.predict("I would") #throws an error.. I was expecting a list of tensors to be returned
Is there a way to get embeddings directly from ULMFiT, the way BERT and the Universal Sentence Encoder provide them?
Since next-word prediction works well in ULMFiT, I would like to use the same model for sentence similarity. Please throw some light on this…
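To clarify what I'm after, here's a minimal PyTorch sketch of the idea (not fastai-specific; the plain `nn.LSTM` and the layer sizes here are just stand-ins for the pre-trained AWD_LSTM encoder): run the token ids through the encoder, mean-pool the hidden states into one fixed-size vector per sentence, then compare sentences with cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for the pre-trained ULMFiT encoder (hypothetical sizes).
vocab_size, emb_sz, hidden_sz = 100, 16, 32
embedding = nn.Embedding(vocab_size, emb_sz)
encoder = nn.LSTM(emb_sz, hidden_sz, batch_first=True)

def encode(token_ids):
    """Mean-pool the encoder's hidden states into one sentence vector."""
    with torch.no_grad():
        emb = embedding(token_ids)             # (1, seq_len, emb_sz)
        outputs, _ = encoder(emb)              # (1, seq_len, hidden_sz)
        return outputs.mean(dim=1).squeeze(0)  # (hidden_sz,)

a = encode(torch.tensor([[1, 2, 3]]))
b = encode(torch.tensor([[1, 2, 4]]))
sim = F.cosine_similarity(a, b, dim=0)
print(a.shape, float(sim))
```

Is this pooling-over-hidden-states approach the right way to get sentence vectors out of the ULMFiT encoder, or is there a supported API for it?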