Hi All,
I’ve been working on an experimental library called AdaptNLP that builds on Flair and Transformers to streamline training, inference, and deployment with a ULMFiT-style transfer-learning approach on top of the latest state-of-the-art pre-trained language models.
Its main features include:
- Easy-to-use API for running batch inference on state-of-the-art NLP models for tasks like:
  - Text/Sequence Classification
  - Token Tagging
  - Span-based Question Answering
- A ULMFiT approach to fine-tuning Transformer language models and training your own NLP-task classifiers
  - Fine-tune language models such as BERT, ALBERT, and GPT-2
  - Train classifiers that can be loaded into the above-mentioned inference API
- Deploy open pre-trained or custom-trained models as a microservice
  - Built on FastAPI
  - Two-step Docker deployment
  - GPU compatible
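As a quick sketch of what the batch-inference API looks like (the `EasySequenceClassifier` and `tag_text` names below reflect early AdaptNLP releases and may have changed; the repo README is the authoritative reference):

```python
# Sketch of AdaptNLP batch inference for sequence classification.
# Class/method names (EasySequenceClassifier, tag_text, model_name_or_path)
# are based on early AdaptNLP releases -- check the repo README for the
# current API before relying on them.

def classify(texts):
    # Import inside the function so the sketch reads without adaptnlp
    # installed; `pip install adaptnlp` first to actually run inference.
    from adaptnlp import EasySequenceClassifier

    classifier = EasySequenceClassifier()
    # tag_text runs batch inference over a list of strings using any
    # Hugging Face model identifier or a local model path.
    return classifier.tag_text(
        texts,
        model_name_or_path="nlptown/bert-base-multilingual-uncased-sentiment",
    )

# usage (requires adaptnlp installed):
#   for sentence in classify(["AdaptNLP looks promising.", "This was a letdown."]):
#       print(sentence.labels)  # returned Flair Sentence objects carry predicted labels
```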
The library is here: https://github.com/Novetta/adaptnlp, and it is available on PyPI: install it with `pip install adaptnlp`.
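For the two-step Docker deployment mentioned above, the commands are presumably along these lines (the image tag and port here are placeholders, not the project's actual values; see the repo for the real instructions):

```shell
# Build the microservice image from the repo's Dockerfile
# (tag is a placeholder; consult the AdaptNLP repo for actual commands).
docker build -t adaptnlp-api .
# Run it, exposing the FastAPI service; add --gpus all for GPU inference.
docker run -p 8000:8000 adaptnlp-api
```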
Please feel free to try it out!
AdaptNLP is still in its early development stages, so feedback and issue threads in the AdaptNLP repo would be very much appreciated.
Thanks!