Share your V2 projects here

On Saturday (yesterday), I added support for Image Classification using fastai2.vision. After training and exporting your model, you can:

  • upload your model
  • make predictions via the web or the Python SDK (iOS is Apple TestFlight only at the moment)
  • share the model with your friends (or anyone, really) so they can use and see what you have been working on.

Today, I’ll create an example notebook that takes you through all the steps…
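In the meantime, here is a minimal sketch of the fastai2 side, using the standard Pets dataset as a stand-in for your own data; the upload and share steps are SeeMe.ai-specific, so they are only hinted at in a comment:

```python
from fastai2.vision.all import *

# Train a quick image classifier (any fastai2 vision learner will do);
# PETS is just a stand-in for your own dataset.
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/"images"),
    pat=r'(.+)_\d+.jpg', item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

# export.pkl is the file you upload to SeeMe.ai
learn.export("export.pkl")
# (uploading and sharing then happen via the SeeMe.ai web UI or SDK)
```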

Let me know if you’re interested…

Would love to hear what the community thinks!

2 Likes

As for myself, I was thinking about these two ideas:

  1. A classifier to detect machine- vs human-generated text. This would involve creating a dataset myself by:

    • grabbing a selection of Wikipedia articles.
    • for each one, feeding its first sentence to GPT-2 (or whatever the SOTA language model is) and letting it generate a continuation (see the sketch after this list).
    • the goal is to produce a good mix of fake/original text and train a model on it.
  2. Work on the dataset that comes with the Kaggle Deepfake Detection Challenge. This would allow me to test fastai2 on video/audio.
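For the generation step in idea 1, this is roughly what I have in mind, assuming the Hugging Face transformers library and its stock GPT-2 (the example sentence is made up):

```python
from transformers import pipeline

# Stock GPT-2 via the Hugging Face transformers library
generator = pipeline("text-generation", model="gpt2")

def fake_continuation(first_sentence, max_length=120):
    """Feed an article's first sentence to GPT-2 and keep the continuation."""
    out = generator(first_sentence, max_length=max_length,
                    num_return_sequences=1)
    return out[0]["generated_text"]

# Made-up example; label 1 = machine-generated, 0 = original article text
sentence = "The honey bee is a eusocial flying insect."
dataset_row = (fake_continuation(sentence), 1)
```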

What do you think, guys?

cc @lgvaz FYI this is what I came up with :wink:

4 Likes

If anyone would like to get their feet wet with audio classification, you can join me on a project here :blush:

For now I am using a pretrained CNN and training like we did in lecture 1.

I created a starter notebook that reads in the data using the DataBlock API and now the next step would be to run various models on the data and see what results you get :slight_smile:
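To give an idea of the shape of it, here is a minimal DataBlock sketch, assuming the audio has already been converted to spectrogram images saved one subfolder per class (the data path is hypothetical):

```python
from fastai2.vision.all import *

# Spectrogram images, one subfolder per class (hypothetical path)
dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224))

dls = dblock.dataloaders(Path("data/spectrograms"), bs=64)
learn = cnn_learner(dls, resnet34, metrics=accuracy)  # pretrained CNN, as in lecture 1
learn.fine_tune(3)
```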

If you could train on this data and push a notebook (showing how you went about this and your results) that would be awesome! :heart:

I am very excited about this and I think it can evolve into something really cool. Above all, I am genuinely interested in what you think about it. Is there anything I can do to make this more interesting to you? Is there something you do not like about it? What do you like and would like to see more of? Hit me with all you’ve got :slightly_smiling_face:

I have never tried such a thing before (collaborating openly on a problem) but I am hoping we are onto something good here. There is a lot we can learn from one another. If I am mistaken though, please send me your feedback so that I can change my ways :wink:

Thank you! :pray:

14 Likes

If you are looking for an easy way to deploy and share your models, I’m super excited to share my SeeMe.ai quick guide for fastai v2.

It’s a step-by-step process for deploying and sharing your trained models with friends and the rest of the world.

Have a look at the SeeMe.ai quick guides GitHub repo.

Absolutely looking forward to seeing what you build, deploy, and share!

Thank you all!

2 Likes

@muellerzr I somehow got DeViSE working with fastai2. I used the Tiny ImageNet dataset, so the performance is not as good as the ImageNet version from the v2 course, but I need help with plotting: is there any direct way in fastai to plot the validation set (dls.valid_ds), or do I have to write a function myself?

2 Likes

Since I wasn’t at all intimidated by the amazing SG projects, and not at all because I was looking for a simple, one-click setup-and-deploy idea:

I made a Not hot dog app using seeme.ai, thanks to @zerotosingularity for creating the platform!

Check the thread here

Testing:

Ok, that’s a hot dog!

And @muellerzr is not. Hmm :thinking:

I think I did some good homework :smiley:

12 Likes

I will have to see your training and validation set! :slight_smile:

SeeMe.ai provides the deployment and sharing in this case; the model is just your fast.ai model… :man_shrugging: (it always outputs one of the classes it was trained on, as you very well know :smiley:)

1 Like

That is awesome, would love to check it out when you are ready to share it :slight_smile:

1 Like

For plotting the validation set, you could peek at show_results. But otherwise, yes: the old lecture (v2 of the course) has the DeViSE notebook, which may be less of a headache to port over.
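Something like this, assuming a standard learner/dls pair:

```python
# Look at raw validation items
dls.valid.show_batch(max_n=9)

# Plot predictions against targets on the validation set
learn.show_results(max_n=9)

# Or grab the raw predictions (ds_idx=1 is the validation set)
preds, targs = learn.get_preds(ds_idx=1)
```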

Are there any resources for finding available datasets? I am thinking of working on a bee classifier (similar to the lesson 1 dog and cat classifier) and was wondering how/where I can find a dataset of bees.

Kaggle would be my first go-to. There are tons of datasets on there. (First result after a quick search: https://www.kaggle.com/jenny18/honey-bee-annotated-images)
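If you prefer to stay in Python, the official kaggle package can fetch it too (assuming you have an API token at ~/.kaggle/kaggle.json):

```python
# Needs `pip install kaggle` and an API token at ~/.kaggle/kaggle.json
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()
api.dataset_download_files("jenny18/honey-bee-annotated-images",
                           path="data/bees", unzip=True)
```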

1 Like

Thanks!

Hi everyone, I tried my best to rewrite in fastai2 the DeViSE implementation that Jeremy did using fastai v0.7 for v2 of the course; here you can see my implementation. I used the Tiny ImageNet dataset from Stanford, a subset of ImageNet containing 200 classes of 64x64 images. I used this dataset because ImageNet is huge and I can’t work with it on Colab. I didn’t train from scratch: since it’s just a subset of ImageNet, replacing the classifier head and training that last part should be enough. If anyone finds any bug, feel free to ping me; I’m still figuring out fastai v2. I used the higher-level API here because I couldn’t get the DataBlock to work (I’ll switch if I figure it out).
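For reference, here is my best guess at what the DataBlock version might look like: images in, word vectors out via RegressionBlock. The class list and word vectors below are random stand-ins (real DeViSE would load pretrained embeddings, e.g. 300-d fastText vectors), and the folder layout is hypothetical:

```python
from fastai2.vision.all import *
import numpy as np

# Stand-in word vectors; real DeViSE would use pretrained embeddings,
# one 300-d vector per class name
classes = ["class_a", "class_b"]  # hypothetical folder names
word_vecs = {c: np.random.randn(300).astype(np.float32) for c in classes}

def get_y(path):
    # Parent folder name is the class; the target is its word vector
    return word_vecs[path.parent.name]

dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock(n_out=300)),  # image -> 300-d vector
    get_items=get_image_files,
    get_y=get_y,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(64))  # Tiny ImageNet images are 64x64

dls = dblock.dataloaders(Path("data/tiny-imagenet/train"), bs=128)
```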

5 Likes

Finally, I finished building an image segmentation pipeline for the Kaggle TGS Salt Identification challenge. The solution should be able to get you into the top 1-5%.

The solution is an update to my old repo, which is based on fastai 0.7.

Key features of the notebook:

  • Creating the DataBlock (Dataset, DataLoader)
  • Model
    • Create a fastai unet learner
    • Create a custom unet model demonstrating features like
      • Deep supervision
      • Classifier branch
      • Hypercolumns
  • Train with K-Fold cross-validation
  • Ensemble by averaging
  • Loss function
    • Classifier loss
    • Loss for handling deep supervision
    • Segmentation loss
  • TTA - horizontal flip
  • Create a submission file.

I want to record a code walkthrough and post it here soon; posting this now so that I cannot escape from doing it. I’m planning to pick up another competition, probably Quick, Draw!, and build a complete pipeline. If anyone wants to join me on the journey, please let me know.
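For a taste of the K-fold + TTA part, here is a stripped-down sketch with default losses rather than my custom ones; the folder layout matches the TGS Salt download, but paths and epoch counts are illustrative:

```python
from fastai2.vision.all import *
from sklearn.model_selection import KFold

def make_dls(fnames, val_idx):
    dblock = DataBlock(
        blocks=(ImageBlock, MaskBlock(codes=["bg", "salt"])),
        get_y=lambda p: p.parent.parent/"masks"/p.name,  # images/ and masks/ share filenames
        splitter=IndexSplitter(val_idx),
        item_tfms=Resize(128))
    return dblock.dataloaders(fnames, bs=32)

fnames = get_image_files(Path("tgs-salt/train/images"))
test_files = get_image_files(Path("tgs-salt/test/images"))

fold_preds = []
for _, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(fnames):
    dls = make_dls(fnames, list(val_idx))
    learn = unet_learner(dls, resnet34)
    learn.fine_tune(5)
    tdl = learn.dls.test_dl(test_files)
    preds, _ = learn.tta(dl=tdl)        # test-time augmentation at inference
    fold_preds.append(preds)

ensemble = torch.stack(fold_preds).mean(0)  # average the 5 folds
```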

5 Likes

Posting some homework from Zach M’s class, which implemented Jeremy’s lecture (part 2, lesson 13, 2018) on style transfer (Gatys 2015) in fastai2.

See repository for full write-up: https://github.com/sutt/fastai2-dev/tree/master/style-transfer-hw

I trained most of these models around Feb 10th with the work-in-progress v2 library. I went back and duplicated the work for one of the models today: the API is still in place and everything works, but the results got a lot better. Perhaps it was just a lucky seed, but it’s exciting to see improvements emerge when you haven’t done anything :slight_smile:
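For anyone following along, the heart of the Gatys approach is the Gram-matrix style loss; a minimal PyTorch version (not the exact code from my repo) looks like this:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    "Channel-wise feature correlations, the core of the Gatys style loss."
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return (f @ f.transpose(1, 2)) / (c * h * w)

def style_loss(gen_feats, style_feats):
    # MSE between Gram matrices of generated- and style-image activations
    return F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
```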

23 Likes

After learning about fine_tune and trying to explore it further, I found a paper on coral identification in which they used Keras and a ResNet-50 for ~300 epochs and got an accuracy of 83%. Using some of the techniques from the first lesson, chapter 6, and chapter 7 (progressive resizing, test-time augmentation, and presizing), I was able to get 88% accuracy in just 9 epochs! Read about it here
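The core recipe, sketched with a hypothetical dataloader builder for the coral images (paths and epoch counts are illustrative, not the exact code from the post):

```python
from fastai2.vision.all import *

def get_dls(size, bs):
    # Hypothetical folder of coral images, one subfolder per class
    return ImageDataLoaders.from_folder(
        Path("data/corals"), valid_pct=0.2, seed=42, bs=bs,
        item_tfms=RandomResizedCrop(size, min_scale=0.5),
        batch_tfms=aug_transforms())

# Progressive resizing: train small first, then fine-tune at full size
learn = cnn_learner(get_dls(128, 64), resnet50, metrics=accuracy)
learn.fine_tune(4)
learn.dls = get_dls(224, 32)
learn.fine_tune(5)

# Test-time augmentation for the final accuracy
preds, targs = learn.tta()
print(accuracy(preds, targs))
```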

Edit: sorry, it was a 404 for a moment; I briefly rearranged things on the site and it broke the link.

5 Likes

I worked on getting U-GAT-IT working with fp16. It takes in a picture of a person and then maps it to an anime image (CycleGAN-style training in fp16).
Everything is currently a work in progress, but here are the results and a WIP blog (btw, I’m looking for a job).
Yes, all of this was done in fastai2. I have been working on it since October.
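The fp16 part in fastai2 is essentially a one-liner; here is a minimal self-contained example on MNIST_SAMPLE (the U-GAT-IT model itself is custom and not shown):

```python
from fastai2.vision.all import *

path = untar_data(URLs.MNIST_SAMPLE)
dls = ImageDataLoaders.from_folder(path)

# to_fp16 switches training to mixed precision (with loss scaling)
learn = cnn_learner(dls, resnet18, metrics=accuracy).to_fp16()
learn.fine_tune(1)

learn = learn.to_fp32()  # back to full precision, e.g. before export
```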


Outputs:

Inputs:

Edit: forgot to normalize the images; also uploaded an input example.

32 Likes

Hey! :wave: I happened to be learning about autoencoders when the invitation for this V2 course came in, so I implemented three experiments in v2: https://github.com/jerbly/fastai2_projects. This was a good way to learn how to implement simple PyTorch models in v2 (small enough to run on CPU), and it includes a custom batch transform class that adds random noise for the denoising autoencoder. :slight_smile:
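The noise transform is roughly in this spirit (a simplified sketch, not my exact class; it assumes the images are already float tensors by the time batch transforms run):

```python
from fastai2.vision.all import *

class AddNoise(Transform):
    "Batch transform: add Gaussian noise to TensorImage inputs."
    def __init__(self, std=0.1): self.std = std
    def encodes(self, x: TensorImage):
        return x + self.std * torch.randn_like(x)

# Used e.g. as batch_tfms=[AddNoise(0.1)] when building the DataBlock;
# type dispatch means only TensorImage batches get the noise.
```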

2 Likes

This is amazing! But the link to Medium is broken right now.
Is this based on a particular lesson in the series?

Nope, completely my own project. I think the link doesn’t work unless you are logged into a Medium account.