@sgugger wrote a tutorial for using HuggingFace Transformers with fastai2 with the example of GPT-2 text generation fine-tuning:
http://dev.fast.ai/tutorial.transformers
Very interesting work! Thanks for the tutorial!
Amazing tutorial! Thanks for sharing.
All the credit goes to Sylvain!
@sgugger I am curious whether you have tried text classification yet? Also, is there a reason you used GPT-2 as opposed to BERT? It seems like most of this code would transfer well to BERT and related models (RoBERTa, ALBERT, etc.), right?
Thanks for highlighting it @ilovescience, it's a great tutorial. I really like using the callback to keep only the relevant model output; wish I had thought of that!
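For anyone curious, the idea is that HuggingFace models return a tuple (logits plus extras like past key/values), while fastai's loss functions expect a single tensor. A minimal dependency-free sketch of that callback idea (the class name and method here are illustrative stand-ins, not the tutorial's exact code, which uses a fastai `Callback` hooking `after_pred`):

```python
# Sketch of the "keep only the relevant output" callback idea.
# Hypothetical names; the real tutorial implements this as a fastai Callback.
class DropOutput:
    """After the model runs, discard everything but the first element
    (the logits) of the output tuple so the loss sees a single tensor."""
    def after_pred(self, pred):
        # HuggingFace models often return (logits, past, ...); fastai's
        # loss expects just the logits.
        return pred[0] if isinstance(pred, tuple) else pred

# Example: a fake model output tuple of (logits, past_key_values)
fake_output = ([[0.1, 0.9], [0.8, 0.2]], "past_key_values_placeholder")
logits = DropOutput().after_pred(fake_output)
```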
As well as the text classification notebook (single-sentence only for now), I just added a notebook to FastHugs on how to pretrain or fine-tune a transformer language model (RoBERTa in this case), which uses a masked-token task as opposed to next-token prediction.
No doubt it can be optimised, but it should be useful for fine-tuning a transformer LM on domain-specific data before training a classifier head.
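To illustrate the difference: unlike next-token prediction (GPT-2), the masked-token task hides a random fraction of the input and asks the model to recover the original tokens. A rough dependency-free sketch of the masking step (names like `MASK_ID` and `mask_prob` are assumptions here, not FastHugs code; real pipelines also sometimes keep or randomize the selected tokens instead of always masking):

```python
import random

# Placeholder id for the [MASK] token (BERT-style vocabularies use 103).
MASK_ID = 103

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    """Randomly replace a fraction of token ids with MASK_ID.
    Returns (masked_ids, labels), where labels holds the original id at
    masked positions and -100 elsewhere so the loss ignores them."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)   # hide this token
            labels.append(tid)       # model must predict the original
        else:
            masked.append(tid)
            labels.append(-100)      # ignored by the loss
    return masked, labels
```

The `-100` label convention matches what PyTorch's `CrossEntropyLoss(ignore_index=-100)` expects, which is why it is the common choice for unmasked positions.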
I haven’t tried text classification yet, though porting it should be just as easy. And yes, all the language models would work; I just chose GPT-2 because I wanted to try that one.
This link does not work anymore. Where can I see this awesome tutorial?