Show us what you’ve created with what you learned in fast.ai v2! It could be a blog post, a Jupyter notebook, a picture, a GitHub repo, a web app, or anything else. Some tips:
Probably the easiest way to blog is using fastpages or Medium.
The easiest way to share a notebook on GitHub is to install the gist-it extension. This will only be possible if you use a platform that supports Jupyter extensions, such as GCP. Otherwise, you can create a notebook gist by clicking File -> Download to get your notebook onto your computer, and then following the steps from this SO post:
I would also be interested in working on MultiFiT. I’ve been thinking of trying it out for a future project. In any case, I’ll start by reading the paper thoroughly.
Also, I forgot one more: I’m working on deploying my models via Starlette (i.e. a REST API), one for each type. I’m hoping to have it done in the next week or two, with some detailed walkthroughs. I have the image classification models done, so if anyone needs help with those, feel free to ping me.
I (with a large amount of help from @lgvaz) have style transfer implemented in v2 here for you to start with:
While I’m at it, I guess I’ll also mention that repo too (yes, I have many, many projects). If you were wondering how to implement something, I’ve probably done it, or something extremely similar, in this repository to help you get started: https://github.com/muellerzr/Practical-Deep-Learning-for-Coders-2.0. It’s a collection of notebooks from my study group, and it includes pose detection (via keypoints), style transfer, EfficientNet, audio, and many other bits not covered in the first course.
PS: Don’t feel discouraged if your idea is already covered there, please still do it! We all learn differently.
I couldn’t recommend more that you all check out @muellerzr’s repo and course. Absolutely fantastic; it really helped me get familiar with fastai2.
I’m building a library using nbdev to facilitate the use of fast style transfer (and anything related to feature loss, really); take a look here. I tried to build it as modularly as I could, and it was a surprise to me that in the end I could change the task from stylizing images to putting hats on cats with very few lines of code.
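For readers unfamiliar with feature losses, here is a minimal self-contained sketch of the idea, in the spirit of the library described above but not its actual API. Activations from chosen layers of a (normally pretrained) encoder are captured with forward hooks, and the loss compares those features between the prediction and the target:

```python
# Minimal sketch of a feature (perceptual) loss using forward hooks.
# The tiny Sequential encoder below is a stand-in; a real setup would
# use a pretrained VGG body and carefully chosen layer indices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureLoss(nn.Module):
    def __init__(self, encoder, layer_ids):
        super().__init__()
        self.encoder = encoder.eval()
        self.feats = []
        # Stash the activations of the chosen layers on every forward pass
        for i in layer_ids:
            encoder[i].register_forward_hook(
                lambda m, inp, out: self.feats.append(out))

    def _features(self, x):
        self.feats.clear()
        self.encoder(x)
        return [f.clone() for f in self.feats]

    def forward(self, pred, target):
        with torch.no_grad():                     # target features are fixed
            target_feats = self._features(target)
        pred_feats = self._features(pred)
        # L1 distance between corresponding feature maps, summed over layers
        return sum(F.l1_loss(p, t) for p, t in zip(pred_feats, target_feats))

encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
loss_fn = FeatureLoss(encoder, layer_ids=[1, 3])
```

The modularity mentioned above comes from exactly this separation: swapping the task (style transfer vs. hats on cats) mostly means swapping the encoder, the layers, and the data, while the loss machinery stays the same.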
I decided to go with the plant pathology competition on Kaggle, which is part of a CVPR workshop on fine-grained visual categorization:
The data is quite small, so the competition seems very beginner-friendly.
This is the starter notebook with fastai v2, which uses almost nothing except what was in Lecture 1, and it still gives quite an impressive result very fast: https://bit.ly/2IVFwW3
My plan of attack (open to potential collaboration, should I be able to squeeze things in):
Update my “Finding DataBlock Nirvana” article to v2
HuggingFace Abstract Summarization integration into v2 (from DataBlock to Predictions)
HuggingFace NER integration into v2 (from DataBlock to Predictions)
I’ve already done the last two for v1, and have just figured out the DataBlock bits for #2 above (I think).
At some point I’d like to work on a full HuggingFace integration package that incorporates all of the above and all the other HF bits into something like fastai.huggingface, with various sub-packages for using their schedulers, tokenizers, models, etc.