I’ve been working on an experimental library called AdaptNLP that builds on Flair and Transformers to streamline training, inference, and deployment for a ULMFiT-style (transfer learning) workflow with the latest state-of-the-art pre-trained language models.
A couple of features include:
- An easy-to-use API for running batch inference on state-of-the-art NLP models for:
  - Text/sequence classification
  - Token tagging
  - Span-based question answering
- A ULMFiT approach to fine-tuning Transformer language models and training your own NLP-task classifiers:
  - Fine-tune language models such as BERT, ALBERT, and GPT-2
  - Train classifiers that can be loaded into the inference API above
- Deployment of open pre-trained or custom-trained models as a microservice:
  - Built on FastAPI
  - Two-step Docker deployment
  - GPU compatible
The library is here: https://github.com/Novetta/adaptnlp
It is also available on PyPI and can be installed with `pip install adaptnlp`.
Please feel free to try it out!
Since the library is still in its early development stages, feedback and issue threads in the AdaptNLP repo would be very much appreciated.
Hi @aychang! Is the adaptnlp library still actively maintained?
I tried to install it, and I noticed a few failures right away when running the `nbdev_test_nbs` command.
I was wondering if that is a known issue and/or if it’s worth spending time looking into it.