AdaptNLP: ULMFiT Approach with Transformers

Hi All,

I’ve been working on an experimental library called AdaptNLP that uses Flair and Transformers to streamline training, inference, and deployment for a ULMFiT-style (transfer learning) approach with the latest state-of-the-art pre-trained language models.

A couple of features include:

  • Easy-to-use API for running batch inference on state-of-the-art NLP models, for tasks like the following (see the sketch after this list):

    • Text/Sequence Classification
    • Token Tagging
    • Span-based Question Answering
  • A ULMFiT-style approach to fine-tuning Transformer language models and training your own NLP-task classifiers

    • Fine-tune language models like BERT, ALBERT, GPT2, etc.
    • Train classifiers that can be loaded into the above-mentioned inference API
  • Deploy open pre-trained or custom-trained models as a microservice

    • Uses FastAPI
    • Two-command Docker deployment
    • GPU compatible
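
To give a quick feel for the inference API, here is a minimal sketch of batch sequence classification. The class and method names (EasySequenceClassifier, tag_text) and the model id are taken from the repo’s examples and may change, so check the README for exact signatures:

```python
# Minimal sketch of batch inference with AdaptNLP's sequence classification
# module. Class/method names and the model id are assumptions based on the
# repo's examples -- verify against the README.
from adaptnlp import EasySequenceClassifier

classifier = EasySequenceClassifier()

# Tag a batch of text with a pre-trained Hugging Face model
sentences = classifier.tag_text(
    text=["This didn't work at all.", "I really enjoyed this!"],
    model_name_or_path="nlptown/bert-base-multilingual-uncased-sentiment",
)
for sentence in sentences:
    print(sentence.labels)  # predicted label(s) with confidence scores
```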

The library is here: https://github.com/Novetta/adaptnlp
It is also available on PyPI and can be installed with pip install adaptnlp.
Please feel free to try it out!

Since the library is in its early development stages, feedback and issue threads in the AdaptNLP repo would be very much appreciated.

Thanks!


Is it possible to fine-tune a question-answering model with your library? If so, can you provide an example? Thanks.


@xjdeng As of now adaptnlp doesn’t have retraining capabilities for question answering, but it’s something we’re working on. In the meantime, QA inference works out of the box; a rough sketch is below.
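
Here is a rough sketch of that inference path. The EasyQuestionAnswering class, the predict_qa method, and its parameters are assumptions drawn from the repo’s examples, so verify them against the README:

```python
# Rough sketch of span-based QA inference with AdaptNLP. The class name
# (EasyQuestionAnswering), method (predict_qa), and its parameters are
# assumptions from the repo's examples -- verify against the README.
from adaptnlp import EasyQuestionAnswering

qa = EasyQuestionAnswering()
best_answer, top_answers = qa.predict_qa(
    query="What does AdaptNLP use for deployment?",
    context="AdaptNLP deploys models as a microservice using FastAPI.",
    n_best_size=5,
    model_name_or_path="distilbert-base-uncased-distilled-squad",
)
print(best_answer)
```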

If you want to dive deeper into the transformers library, it is possible to fine-tune and train a QA model with the transformers Trainer module (which adaptnlp uses under the hood). An example can be found here: https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb
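
For a sense of the overall shape, here is a minimal sketch of Trainer-based QA fine-tuning. The checkpoint and hyperparameters are illustrative, and the dataset preprocessing (mapping answers to token start/end positions) is elided since the linked notebook covers it in full:

```python
# Sketch of fine-tuning a QA model with the transformers Trainer.
# The checkpoint and hyperparameters are illustrative; train_dataset must
# already be tokenized with start_positions/end_positions (see the notebook).
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

training_args = TrainingArguments(
    output_dir="qa-finetuned",
    num_train_epochs=2,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # preprocessed SQuAD-style dataset (elided)
)
trainer.train()
```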