How should masking be handled for NLP applications in the new fastai v1?
In Keras you’d set `mask_zero=True` on the `Embedding` layer and the mask would then propagate through subsequent layers… in fastai I don’t see any masking out of the box. Is it enough to create a dataloader that utilizes `pack_padded_sequence`, and then PyTorch will take care of the rest?
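For context, here is a minimal sketch (my own toy example, not fastai code) of what I mean: padded batch → `Embedding` with `padding_idx=0` (the rough analogue of Keras’s `mask_zero`) → `pack_padded_sequence` before the RNN, so the recurrent layer never steps over the pad positions:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Toy batch: 3 sequences padded with index 0 up to length 5,
# sorted by length (required with enforce_sorted=True).
vocab_size, emb_dim, hidden = 10, 8, 16
batch = torch.tensor([
    [4, 2, 7, 1, 3],   # length 5
    [5, 9, 2, 0, 0],   # length 3
    [6, 1, 0, 0, 0],   # length 2
])
lengths = torch.tensor([5, 3, 2])

# padding_idx=0 keeps the pad token's embedding frozen at zero
emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
rnn = nn.LSTM(emb_dim, hidden, batch_first=True)

packed = pack_padded_sequence(emb(batch), lengths,
                              batch_first=True, enforce_sorted=True)
out, (h, c) = rnn(packed)
padded_out, out_lengths = pad_packed_sequence(out, batch_first=True)

print(padded_out.shape)       # torch.Size([3, 5, 16])
print(out_lengths.tolist())   # [5, 3, 2]
# Output at pad positions is exactly zero -- the RNN never ran there:
print(padded_out[1, 3:].abs().sum().item())  # 0.0
```

My question is whether fastai’s text dataloaders do something equivalent internally, or whether I need to wire this up myself.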