Problems - Pretrained model predict differently in fastai & pytorch

Hi everyone, I ran into a problem when I tried to load a model pretrained in fastai with plain PyTorch and make predictions. My goal is to extract an inner layer's output, so as a first check I compared the last layer's output against fastai's prediction to see whether they match. They are different, yet each stays the same across runs (so the difference is deterministic, not random). I assume my fastai code is correct, because I have used it for training and prediction and the results are good.

from pytorch_pretrained_bert.modeling import BertConfig, BertForSequenceClassification
bert_model = BertForSequenceClassification.from_pretrained(config.bert_model_name, num_labels=1)

from pytorch_pretrained_bert import BertTokenizer
bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")

text = 'the author concludes the story with this because gardens can not grow with snow on the ground .'

# fastai
learner = Learner(
    databunch, bert_model,
    loss_func=loss_func,
)
pred = learner.predict(text)

# pytorch
# first convert the string to a list of token ids
import torch

tokenized_text = bert_tok.tokenize(text)
indexed_tokens = bert_tok.convert_tokens_to_ids(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])

with torch.no_grad():
    pred = bert_model(tokens_tensor)
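One likely source of this kind of mismatch (an assumption, not a confirmed diagnosis of your setup) is preprocessing: the linked fastai tutorial builds its databunch with a custom tokenizer that wraps every sequence in BERT's `[CLS]`/`[SEP]` special tokens, while the manual PyTorch snippet above feeds raw word-piece tokens. The model then sees two different input id sequences and produces different logits. A toy sketch of the difference (the function names here are hypothetical stand-ins, not the libraries' actual code):

```python
# Hypothetical illustration of the preprocessing gap between the two paths.

def fastai_tutorial_tokens(tokens, max_seq_len=128):
    # Mirrors what the tutorial's custom tokenizer does (assumption):
    # truncate, then wrap in BERT's special tokens.
    return ["[CLS]"] + tokens[: max_seq_len - 2] + ["[SEP]"]

def manual_pytorch_tokens(tokens):
    # The snippet above: tokens are converted to ids as-is, with no
    # special tokens added.
    return list(tokens)

tokens = "the author concludes the story".split()
print(fastai_tutorial_tokens(tokens))  # ['[CLS]', 'the', 'author', ..., '[SEP]']
print(manual_pytorch_tokens(tokens))   # ['the', 'author', ...]
```

If that is the cause, adding the same `[CLS]`/`[SEP]` wrapping before `convert_tokens_to_ids` should bring the outputs in line. It is also worth calling `bert_model.eval()` before predicting, since `torch.no_grad()` only disables gradient tracking, not dropout.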

For the fastai code, I'm following this: a-tutorial-to-fine-tuning-bert-with-fast-ai

For the PyTorch code, I'm following the official PyTorch pretrained BERT example.


Hi, did you come right with this? I have a similar issue:

Mismatch between FastAI prediction and Pytorch prediction - PyTorch Forums