http://academictorrents.com/details/a4fee5547056c845e31ab952598f43b42333183c
In my experience with the wikitext-103 LM, I have noticed it favors past-tense sentences over present tense, which I suspect is because most sentences in Wikipedia articles are in past tense.
What would be a good source for transfer learning to medical notes models?
edit: physicians use a lot of abbreviations
Are LSTMs involved in making the language model?
Sounds like wikitext-103 might be a good start for a transfer learning model
yes, you’ll see them shortly
Yes, the backbone of ULMFit is an AWD-LSTM
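For anyone curious, here's a minimal sketch (using the current fastai API; the IMDB path is only a stand-in corpus for illustration) of how that AWD-LSTM backbone gets instantiated, with the pretrained wikitext-103 weights loaded by default:

```python
from fastai.text.all import *

# Build language-model DataLoaders from a folder of text files
# (URLs.IMDB is just a stand-in corpus for illustration).
path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path, is_lm=True, valid_pct=0.1)

# AWD_LSTM is the ULMFiT backbone; pretrained=True (the default)
# loads weights pretrained on wikitext-103.
learn = language_model_learner(dls, AWD_LSTM, drop_mult=0.3,
                               metrics=Perplexity())
```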
Rachel’s mic didn’t seem to be tied into the live stream.
And then you’d need to fine-tune your model on your medical notes corpus.
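A hedged sketch of that fine-tuning step, continuing from the `learn` object above (the epoch counts and learning rates are illustrative, not a recommendation):

```python
# Train only the newly initialized layers first...
learn.fit_one_cycle(1, 2e-2)

# ...then unfreeze the whole AWD-LSTM and train at lower,
# discriminative learning rates, as in the ULMFiT recipe.
learn.unfreeze()
learn.fit_one_cycle(3, slice(2e-4, 2e-3))

# Save the fine-tuned encoder for a downstream task model.
learn.save_encoder('medical_finetuned_encoder')
```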
Has transfer learning for NLP been proven useful for languages other than English?
Is there bias in language models, such as gender or race bias in the embeddings? How do we deal with it?
I will check on this next time.
There are many algorithms, statistical as well as neural (e.g. LSTMs), that can be used to build language models.
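As a toy illustration of the statistical side, a bigram model just counts token pairs and normalizes the counts into probabilities (a hypothetical minimal example, nothing like the scale of wikitext-103):

```python
from collections import Counter, defaultdict

def train_bigram_lm(tokens):
    """Estimate P(next token | current token) from bigram counts."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    # Normalize each row of counts into a probability distribution.
    return {cur: {nxt: c / sum(nxts.values())
                  for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

tokens = "the cat sat on the mat".split()
lm = train_bigram_lm(tokens)
print(lm["the"])  # {'cat': 0.5, 'mat': 0.5}
```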
Do the wikitext model and the fine-tuned version share the same vocabulary?
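They generally don't share a vocabulary out of the box; a common trick in ULMFiT-style fine-tuning is to remap the pretrained embedding rows onto the new corpus's vocab, initializing unseen tokens to the mean pretrained embedding. A numpy sketch of that idea (toy vocabularies and a hypothetical helper, just for illustration):

```python
import numpy as np

def remap_embeddings(old_emb, old_vocab, new_vocab):
    """Map pretrained embedding rows onto a new corpus vocabulary.

    Tokens shared with the pretrained vocab keep their row; tokens
    the pretrained model never saw get the mean pretrained embedding.
    """
    old_idx = {tok: i for i, tok in enumerate(old_vocab)}
    mean = old_emb.mean(axis=0)
    return np.stack([old_emb[old_idx[tok]] if tok in old_idx else mean
                     for tok in new_vocab])

# Hypothetical toy vocabularies for illustration.
old_vocab = ["the", "cat", "sat"]
old_emb = np.random.randn(3, 4)
new_emb = remap_embeddings(old_emb, old_vocab, ["the", "ecg", "cat"])
print(new_emb.shape)  # (3, 4)
```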
As a matter of fact, yes! Check the language model zoo topic.
I can hear you clearly.
Curious to see if a similar model for Russian exists. Articles on Wikipedia in other languages can be much more limited in quantity and length.
Yes, always. There are debiasing techniques…
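One classic example is the hard-debiasing projection of Bolukbasi et al. (2016): estimate a bias direction from word pairs like "he"/"she" and subtract that component from gender-neutral word vectors. A minimal sketch with made-up embeddings:

```python
import numpy as np

def debias(vec, bias_dir):
    """Remove the component of `vec` along the bias direction."""
    b = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, b) * b

# Hypothetical random embeddings, purely for illustration.
he, she = np.random.randn(4), np.random.randn(4)
doctor = np.random.randn(4)

gender_dir = he - she                 # crude one-pair bias direction
doctor_db = debias(doctor, gender_dir)
print(np.dot(doctor_db, gender_dir))  # ~0: bias component removed
```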
Can transfer learning from wikitext be used in domains for which wikitext has no context?