Your code snippet has def get_rnn_classifier highlighted, so I wanted to make sure your recommendation was indeed to swap out the call to LinearDecoder (and not PoolingLinearClassifier) with a call to my custom classifier (which will attempt to classify the three things noted above simultaneously). Is that correct?
If so, the process would be for me to:
1. Create an instance of a LanguageModelData object:
md = LanguageModelData(...)
2. Define my own PyTorch model using my custom classifier, which will output three values:
rnn_enc = RNN_Encoder(bs, n_tok, emb_sz, nhid, nlayers, pad_token, dropouth=dropouth, dropouti=dropouti, dropoute=dropoute, wdrop=wdrop)
# tie the classifier's embedding weights to the encoder's, if weight tying is on
enc = rnn_enc.encoder if tie_weights else None
model = SequentialRNN(rnn_enc, VerbatimClassifier(n_tok, emb_sz, dropout, tie_encoder=enc))
3. Set the learner to use my PyTorch model:
learner.models.model = model
4. Train in the usual way.
In the notebooks the approach has been to train a language model and then use it in another model for sentiment analysis. But, if I’m understanding you correctly, we are bypassing the second step and doing both tasks at the same time.
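For concreteness, here is a rough sketch of the kind of three-output classifier head I have in mind. To be clear, ThreeHeadClassifier, the class sizes, and the shapes below are placeholders I made up for illustration (loosely following the LinearDecoder pattern of consuming the encoder's output), not existing fastai code:

```python
import torch
import torch.nn as nn

class ThreeHeadClassifier(nn.Module):
    """Hypothetical multi-task head: takes the RNN encoder's output and
    emits three separate sets of logits, one per classification task."""
    def __init__(self, emb_sz, n_cls1, n_cls2, n_cls3, dropout=0.1):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.head1 = nn.Linear(emb_sz, n_cls1)
        self.head2 = nn.Linear(emb_sz, n_cls2)
        self.head3 = nn.Linear(emb_sz, n_cls3)

    def forward(self, enc_out):
        # enc_out: (seq_len, batch, emb_sz); use the last time step's
        # hidden state as the shared representation for all three heads
        h = self.drop(enc_out[-1])
        return self.head1(h), self.head2(h), self.head3(h)

# quick shape check with dummy encoder output
enc_out = torch.randn(70, 4, 400)            # (seq_len, bs, emb_sz)
clf = ThreeHeadClassifier(400, 3, 5, 2)      # three tasks: 3, 5, 2 classes
o1, o2, o3 = clf(enc_out)
print(o1.shape, o2.shape, o3.shape)          # (4, 3), (4, 5), (4, 2)
```

The idea would be to train it with one loss per head (e.g. three cross-entropies summed), so the language-model encoder and all three tasks learn in a single training loop rather than in a separate second stage.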