I’ve written a little sublibrary called fastai_minima. As the name implies, it’s the minimal fastai needed to get the Callback system, Learner, and parameter groups working with raw PyTorch. As a result, the only real requirements are fastcore and fastprogress. Hopefully this can help some folks who are working with raw torch, or who want a minimal version of the library to build on.
This project will be frozen forever on fastai v2.2.6’s Learner/Callback/Optimizer code, unless some drastic and major changes arise.
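To show the shape of the Callback/Learner pattern the library ports over, here is a minimal, self-contained sketch in plain Python. The names below (Callback, Learner, TrackBatches, step_fn) are illustrative, not fastai_minima’s actual API, and the "training step" is a stand-in for a real model/optimizer update:

```python
# Minimal sketch of a fastai-style Callback system: a Learner runs a
# training loop and fires named events that callbacks can hook into.

class Callback:
    def before_fit(self, learn): pass
    def before_batch(self, learn): pass
    def after_batch(self, learn): pass
    def after_fit(self, learn): pass

class Learner:
    def __init__(self, data, step_fn, cbs=None):
        self.data, self.step_fn, self.cbs = data, step_fn, cbs or []

    def _run(self, event):
        for cb in self.cbs:
            getattr(cb, event)(self)

    def fit(self, epochs):
        self._run('before_fit')
        for _ in range(epochs):
            for self.batch in self.data:
                self._run('before_batch')
                # a real Learner would do forward/backward/optimizer.step() here
                self.loss = self.step_fn(self.batch)
                self._run('after_batch')
        self._run('after_fit')

class TrackBatches(Callback):
    """Toy callback that counts how many batches were processed."""
    def before_fit(self, learn): self.n = 0
    def after_batch(self, learn): self.n += 1

cb = TrackBatches()
learn = Learner(data=[1, 2, 3], step_fn=lambda b: b * 0.1, cbs=[cb])
learn.fit(2)
print(cb.n)  # 6 (3 batches x 2 epochs)
```

The point of the pattern is that metrics, logging, schedulers, etc. all become small callback classes instead of edits to the training loop itself.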
Hi everyone, I created a step-by-step tutorial about integrating fastai with React and Flask. The tutorial also builds a web application that enables login by face recognition.
Flask is one of my favourite frameworks, as it’s very usable and the basics of the framework are not too difficult to understand. I also think it is easier than Starlette, and it has a large following. This should help a lot of people integrate fastai with Flask and JavaScript.
Many datasets are noisy and incorrectly labelled. I’m sharing an example notebook to showcase how detecting and cleaning noisy labels in the disaster tweets text dataset [1] can improve accuracy (0.79 -> 0.85). This is done by detecting and removing the noisy labels using cleanlab [2]. For more information, refer to Confident Learning [3].
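To give a feel for the idea behind confident learning, here is a toy, stdlib-only sketch (not cleanlab’s actual API): estimate a per-class confidence threshold from the predicted probabilities, then flag examples whose given label falls below its class threshold while another class clears its own. Real cleanlab uses out-of-sample predicted probabilities and a joint noise matrix; the `flag_noisy` helper and the toy data are mine:

```python
# Hedged sketch of confident-learning-style label noise detection.
# t[j] = mean predicted probability of class j over examples *labeled* j;
# an example is suspicious if another class beats its own threshold while
# the given label misses its threshold.

def flag_noisy(probs, labels):
    n_classes = len(probs[0])
    t = []
    for j in range(n_classes):
        ps = [p[j] for p, y in zip(probs, labels) if y == j]
        t.append(sum(ps) / len(ps))
    noisy = []
    for i, (p, y) in enumerate(zip(probs, labels)):
        best = max(range(n_classes), key=lambda j: p[j])
        if best != y and p[best] >= t[best] and p[y] < t[y]:
            noisy.append(i)
    return noisy

probs = [
    [0.90, 0.10],  # labeled 0, looks like 0 -> clean
    [0.80, 0.20],  # labeled 0 -> clean
    [0.10, 0.90],  # labeled 1 -> clean
    [0.95, 0.05],  # labeled 1 but looks like class 0 -> likely mislabeled
]
labels = [0, 0, 1, 1]
print(flag_noisy(probs, labels))  # [3]
```

Removing the flagged rows and retraining is what produces the accuracy bump described above.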
I have started a blog with fastpages. My first post is about Noisy Imagenette, a version of Imagenette that comes with noisy labels. A training baseline is provided here
Here is my Bear Classification Project. It isn’t unique but I worked through all kinds of errors to get it up and running this past week so I have to just put it out there!
Hey guys!
Using the knowledge I got from the Collaborative Filtering chapter, I decided to go ahead and create a book recommendation app and deployed it on Streamlit. The app also automatically gives links to the suggested books, which can be opened on Goodreads.
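The scoring idea from that chapter is easy to sketch without any framework: a predicted rating for a (user, book) pair is the dot product of their learned embedding vectors plus bias terms. The vectors and titles below are hand-made placeholders rather than trained parameters:

```python
# Dot-product collaborative filtering sketch: rank books for a user by
# dot(user_vec, book_vec) + user_bias + book_bias.

def score(user_vec, item_vec, user_bias, item_bias):
    return sum(u * i for u, i in zip(user_vec, item_vec)) + user_bias + item_bias

def recommend(user_vec, user_bias, items, k=2):
    # items: {title: (embedding_vector, bias)}; return top-k titles by score
    ranked = sorted(
        items,
        key=lambda t: score(user_vec, items[t][0], user_bias, items[t][1]),
        reverse=True,
    )
    return ranked[:k]

items = {
    "Dune":        ([0.9, 0.1], 0.2),
    "Emma":        ([0.1, 0.9], 0.1),
    "Neuromancer": ([0.8, 0.2], 0.0),
}
# a user whose taste leans toward the first latent factor (say, sci-fi)
print(recommend([1.0, 0.0], 0.0, items))  # ['Dune', 'Neuromancer']
```

In the real app the embeddings come from training on a ratings matrix; the ranking step is the same.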
After releasing my (undocumented) ULMFiT notebooks a while ago, I received a couple of requests and questions about pretraining a language model from scratch. So I created a well-documented repo on how to train ULMFiT models with SentencePiece. It covers the following steps:
Prepare a Wikipedia dump (in a Docker container, to avoid the hassle with wikiextractor)
Pretrain a language model on Wikipedia
Fine-tune the LM on the downstream corpus
Train a classifier
Make predictions using the classifier / interpret them with fastinference
I also provide a pretrained German model.
Everyone talks about BERT and Transformers, but for classification ULMFiT is still very close to SOTA (e.g. an unoptimized ULMFiT gets F1 52.54 vs. BERT’s 53.59 on GermEval2019). Fine-tuning plus training the classifier takes about 30 minutes.
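One ingredient behind the fast fine-tuning steps above is ULMFiT’s discriminative learning rates: the paper divides the rate by 2.6 for each lower layer group, so pretrained early layers change slowly while the task head trains at full speed. A quick sketch of that schedule (the helper name and group count are illustrative):

```python
# ULMFiT-style discriminative learning rates: each lower layer group gets
# the rate of the group above it divided by a fixed factor (2.6 in the paper).

def discriminative_lrs(lr_max, n_groups, factor=2.6):
    # index 0 = earliest layers (smallest rate), last index = task head (lr_max)
    return [lr_max / factor ** (n_groups - 1 - i) for i in range(n_groups)]

lrs = discriminative_lrs(0.01, 4)
print(lrs[-1])          # 0.01 for the top, task-specific layers
print(lrs[0] < lrs[1])  # True: lower layers train more slowly
```

In fastai this is what passing a slice of learning rates to the fit methods accomplishes.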
Right now I’m trying to get SHA-RNN (https://arxiv.org/abs/1911.11423) integrated with fastai. So if someone would like to join me, please get in touch :).
This is my Poke-App. I found ~10000 images of first-generation Pokemon and originally trained a model in PyTorch. I then redid it in fastai, and it was much faster and more pleasant. Please feel free to try it out. I was getting ~91% accuracy on first-generation Pokemon.
@lukew Much appreciated! I should give credit to the awesome dataset I found on Kaggle. I had to use resnet101 to get much above 92%… I accidentally deleted the notebook, but fastai made this so much more compact than the pure PyTorch one I had built earlier. My kids have been testing pictures of politicians, Pokemon, and themselves. It seems to guess Starmie or Machamp for most people.
Hi all,
Inspired by this, I made a sample notebook that shows how to train raw PyTorch models, with PyTorch datasets and dataloaders, on multi-core TPUs on Colab, using fastai and the fastai_xla_extensions package to provide the training loop.
Yes, I tried resnet18, 34, and 50 and wasn’t happy with their accuracy; resnet101 got me above 91%. I’ve considered the null category, especially after I passed in a picture of a certain Senate minority leader and it delivered the result of a Chansey. I just need some time: I’m doing this mostly at night, and my youngest hasn’t always been cooperative!
I’d also like to test decision thresholds for the null category and compute an AUC for that.
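For the AUC check, one simple route (fine at this scale; sklearn’s roc_auc_score is the practical choice) is the pairwise definition: AUC is the probability that a random positive example scores above a random negative one. The `roc_auc` helper and the toy scores below are illustrative:

```python
# ROC AUC via direct pairwise comparison: count positive/negative pairs
# where the positive outscores the negative, ties counting half.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]  # classifier confidences
labels = [1, 1, 0, 1, 0]            # 1 = correct class, 0 = not
print(roc_auc(scores, labels))  # 0.8333... (5 of 6 pairs correctly ordered)
```

Sweeping a threshold over the same scores then gives the operating point for a "none of the above" rejection rule.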