EmbeddingDropout applies dropout even when not training?

As far as I can tell, the `EmbeddingDropout` module in the fastai library takes no account of `self.training`, so it will still apply dropout during validation and testing, which does not sound correct.

Have I missed something here?

def forward(self, input):
    """ Invoked during the forward propagation of the RNN_Encoder module.

    Args:
        input (Tensor): input of shape (sentence length x batch_size)

    Returns:
        raw_outputs, outputs (tuple(list(Tensor), list(Tensor))): tensors evaluated
        from each RNN layer without dropouth, and the same with dropouth applied.
    """
    sl,bs = input.size()
    if bs!=self.bs:
        ...  # batch-size reset elided
    with set_grad_enabled(self.training):
        emb = self.encoder_with_dropout(input, dropout=self.dropoute if self.training else 0) #<----
        emb = self.dropouti(emb)
        raw_output = emb
        ...  # RNN layer loop elided (builds new_hidden, raw_outputs, outputs)
        self.hidden = repackage_var(new_hidden)
    return raw_outputs, outputs

In the RNN_Encoder model, the forward pass seems to set dropout to 0 when `self.training` is False (the line marked with the arrow above).

Hi, thanks for your reply.

Yes, I had seen that in the code, but the “standard” way PyTorch dropout modules turn off dropout in eval/testing mode is to consult the `self.training` attribute inside the dropout module itself, not in the module that instantiates it. This attribute is propagated down the module hierarchy when a top-level network/module is set to eval or training mode.
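To make the convention concrete, here is a minimal sketch of that standard pattern (the class name `SelfGatedDropout` is made up for illustration; `nn.Dropout` itself works this way):

```python
import torch
import torch.nn as nn

class SelfGatedDropout(nn.Module):
    """Dropout that consults its own self.training flag internally,
    following the standard PyTorch convention."""
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p

    def forward(self, x):
        # self.training is flipped automatically by model.train()/model.eval(),
        # which recurse through every submodule, so the caller does nothing.
        return nn.functional.dropout(x, p=self.p, training=self.training)

drop = SelfGatedDropout(p=0.5)
x = torch.ones(8, 8)
drop.eval()            # sets training=False on the module automatically
eval_out = drop(x)     # identity: dropout is off in eval mode
drop.train()           # training=True again
train_out = drop(x)    # dropout active: elements zeroed, survivors rescaled
```

Because the check lives inside `forward`, a user who calls `model.eval()` gets dropout disabled everywhere without having to know which submodules contain dropout.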

That way it is “automatic”: setting a network into “training” mode enables dropout, and setting it into “eval” mode disables it. Being automatic, it cannot be forgotten, which is a useful safeguard.

As it currently stands, a PyTorch user of the fastai `EmbeddingDropout` module might reasonably expect it to be handled in the standard way, do nothing special, and then find they are applying dropout in eval mode without realising it.
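For illustration, a self-guarding embedding dropout along AWD-LSTM lines (dropping whole embedding rows) could check `self.training` internally. This is a sketch with an assumed name (`GuardedEmbeddingDropout`), not fastai's actual implementation:

```python
import torch
import torch.nn as nn

class GuardedEmbeddingDropout(nn.Module):
    """Illustrative embedding dropout that zeros entire word vectors during
    training and becomes a plain lookup in eval mode, via self.training."""
    def __init__(self, num_embeddings, embedding_dim, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(num_embeddings, embedding_dim)
        self.dropout = dropout

    def forward(self, words):
        if self.training and self.dropout:
            # Drop whole rows of the embedding matrix, rescaling the
            # survivors so the expected value is unchanged.
            mask = self.embed.weight.new_empty((self.embed.num_embeddings, 1)) \
                       .bernoulli_(1 - self.dropout) / (1 - self.dropout)
            weight = self.embed.weight * mask
        else:
            # Eval mode: no mask, no special handling needed by the caller.
            weight = self.embed.weight
        return nn.functional.embedding(words, weight)

emb = GuardedEmbeddingDropout(10, 4, dropout=0.5)
words = torch.tensor([[1, 2], [3, 4]])
emb.eval()
out = emb(words)   # identical to the raw embedding lookup in eval mode
```

With the guard inside the module, `learner.model.eval()` alone would be enough to switch the embedding dropout off, matching what users expect from `nn.Dropout`.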