Extracting vector from Pooling Classifier


I’ve trained a language model and built a classifier that assigns my texts to a number of classes. Now I’d like to get vector representations of my texts. But I think that instead of taking embeddings from the SequentialRNN, it would be better to take the vector from the middle of the PoolingClassifier — my strings should then be represented better and have higher similarity when they are in the same class.

My model looks like:

(0): MultiBatchEncoder(
  (module): AWD_LSTM(
    (encoder): Embedding(20000, 400, padding_idx=1)
    (encoder_dp): EmbeddingDropout(
      (emb): Embedding(20000, 400, padding_idx=1)
    )
    (rnns): ModuleList(
      (0): WeightDropout(
        (module): LSTM(400, 1152, batch_first=True)
      )
      (1): WeightDropout(
        (module): LSTM(1152, 1152, batch_first=True)
      )
      (2): WeightDropout(
        (module): LSTM(1152, 1152, batch_first=True)
      )
      (3): WeightDropout(
        (module): LSTM(1152, 400, batch_first=True)
      )
    )
    (input_dp): RNNDropout()
    (hidden_dps): ModuleList(
      (0): RNNDropout()
      (1): RNNDropout()
      (2): RNNDropout()
      (3): RNNDropout()
    )
  )
)
(1): PoolingLinearClassifier(
  (layers): Sequential(
    (0): BatchNorm1d(1200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (1): Dropout(p=0.12, inplace=False)
    (2): Linear(in_features=1200, out_features=800, bias=True)
    (3): ReLU(inplace=True)
    (4): BatchNorm1d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (5): Dropout(p=0.1, inplace=False)
    (6): Linear(in_features=800, out_features=498, bias=True)
  )
)

How can I get the output from “(2): Linear”? Do you think this vector would be a better representation of the strings, with higher similarity between strings of the same class?

Best regards

@jeremy, @sgugger: Do you have any idea?

Please read the etiquette section of the FAQ.

Sorry - my bad.

Best regards

I’ve found a solution thanks to:

I’ve modified the code and it looks like:

    outputs = []

    def hook(module, input, output):
        # store the activations of the hooked layer
        outputs.append(output.detach())

    awd = learn.model
    # hook the Linear(1200, 800) layer inside the classifier head
    handle = awd[1].layers[2].register_forward_hook(hook)

    with torch.no_grad():
        preds = learn.get_preds(ds_type=DatasetType.Valid)

The activations of that layer are then collected in the ‘outputs’ list, one tensor per batch.
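Once the vectors are collected, they can be compared directly with cosine similarity. A minimal sketch, using stand-in random tensors in place of the hooked activations (the batch size and the list contents here are assumptions; only the 800-dim feature size comes from the model above):

```python
import torch
import torch.nn.functional as F

# Stand-in for the hooked activations: pretend `outputs` holds two
# batches of 4 texts each, 800 features per text (the output size
# of the Linear(1200, 800) layer).
outputs = [torch.randn(4, 800) for _ in range(2)]
vecs = torch.cat(outputs)  # shape: (8, 800), one row per text

# Cosine similarity of the first text against every text (including itself)
sims = F.cosine_similarity(vecs[0:1], vecs, dim=1)
print(sims.shape)  # torch.Size([8]); sims[0] is ~1.0 (self-similarity)
```

Texts of the same class should end up with higher pairwise values here than texts of different classes, if the hypothesis about the classifier’s hidden layer holds.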