I created a starter notebook that reads in the data using the DataBlock API. The next step would be to run various models on the data and see what results you get.
If you could train on this data and push a notebook (showing how you went about this and your results) that would be awesome!
I am very excited about this and I think it can evolve into something really cool. Above all, I am genuinely interested in what you think about it. Is there anything I can do to make it more interesting to you? Is there something you do not like about it? What do you like and would like to see more of? Shoot me with all you've got!
I have never tried such a thing before (collaborating openly on a problem), but I am hoping we are onto something good here. There is a lot we can learn from one another. If I am mistaken, though, please send me your feedback so that I can change my ways.
@muellerzr I somehow got DeViSE working with fastai2. I used the Tiny ImageNet dataset, so the performance isn't as good as the ImageNet version from the v2 course, but I need help with plotting: is there any direct way in fastai to plot the validation set (dls.valid_ds), or do I have to write a function myself to do so?
I will have to see your training and validation set!
SeeMe.ai provides the deployment and sharing in this case; the model is just your fast.ai model… (it always outputs one of the classes it was trained on, as you very well know).
For plotting the validation set, you could take a peek at show_results. But otherwise yes, the old lecture (v2 of the course) has the DeViSE notebook, which may be less of a headache to port over.
Are there any resources for finding available datasets? I am thinking of working on a bee classifier (similar to lesson 1 dog and cat classifier) and was wondering how/where I can find a dataset for bees?
Hi everyone, I tried my best to rewrite in fastai2 the DeViSE implementation that Jeremy did with fastai 0.7 for v2 of the course; here you can see my implementation. I used the Tiny ImageNet dataset from Stanford, a subset of the ImageNet dataset containing 200 classes of 64x64 images. I used this dataset because ImageNet is huge and I can't work with it on Colab. I didn't train from scratch: since it's just a subset of the ImageNet dataset, I figured replacing the classifier and training only that last part should be enough. If anyone finds any bugs, feel free to ping me; I'm still figuring out fastai v2. I used the higher-level API here because I couldn't get the DataBlock to work (I will switch if I figure it out).
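For readers unfamiliar with DeViSE: the model regresses an image to a word-vector embedding, and inference is a nearest-neighbour search over the class word vectors. A minimal numpy sketch of that lookup (the names are mine, not from the notebook):

```python
import numpy as np

def nearest_labels(img_emb, word_vecs, labels, k=5):
    """Return the k labels whose word vectors are most cosine-similar
    to the predicted image embedding (DeViSE-style inference sketch)."""
    a = img_emb / np.linalg.norm(img_emb)
    W = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    sims = W @ a                   # cosine similarity to every class vector
    top = np.argsort(-sims)[:k]    # indices of the k best matches
    return [labels[i] for i in top]
```

With word vectors stacked as rows, `nearest_labels(pred_embedding, word_vecs, classes, k=3)` gives the top-3 candidate classes for one image.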
I finally finished building an image segmentation pipeline for the Kaggle TGS Salt Identification challenge. The solution should be able to get you into the top 1-5%. It is an update to my old repo, which was based on fastai 0.7.
Key features of the notebook:
- Creating the DataBlock (Dataset, DataLoader)
- Model
  - Create a fastai unet learner
  - Create a custom unet model demonstrating features like:
    - Deep supervision
    - Classifier branch
    - Hypercolumns
- Train on K-Fold
- Ensemble by averaging
- Loss function
  - Classifier loss
  - Loss for handling deep supervision
  - Segmentation loss
- TTA (horizontal flip)
- Create a submission file
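The K-fold / ensemble / TTA parts of a pipeline like this can be sketched generically in numpy. Here `predict_fn` is a stand-in for the real model's prediction call, so none of this is the actual repo code:

```python
import numpy as np

def kfold_indices(n, k=5, seed=42):
    """Split range(n) into k (train_idx, valid_idx) pairs."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

def tta_hflip(predict_fn, model, images):
    """Average predictions over the original and horizontally flipped batch.
    The flipped prediction is flipped back so the masks line up."""
    preds = predict_fn(model, images)
    preds_flip = predict_fn(model, images[..., ::-1])[..., ::-1]
    return (preds + preds_flip) / 2

def ensemble_predict(models, predict_fn, images):
    """Average the TTA predictions of every fold's model."""
    return np.mean([tta_hflip(predict_fn, m, images) for m in models], axis=0)
```

The design point is simply that each fold's model sees a disjoint validation set, and the final prediction averages over both models and flips.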
I want to record a code walkthrough and post it here soon. I am posting this now so that I can't escape from doing it. I am planning to pick up another competition, probably QuickDraw, and build a complete pipeline. If anyone wants to join me on the journey, please let me know.
I trained most of these models around Feb 10th with the work-in-progress v2 library. I went back and duplicated the work for one of the models today: the API is still in place and everything works, but the results got a lot better. Perhaps it was just a lucky seed, but it's exciting to see improvements emerge when you haven't done anything.
After learning about fine_tune and trying to explore it further, I found a paper on coral identification in which they used Keras and a ResNet-50 for ~300 epochs and got an accuracy of 83%. Using some of the techniques from the first lesson, chapter 6, and chapter 7 (progressive resizing, test-time augmentation, and presizing), I was able to get 88% accuracy in just 9 epochs! Read about it here
Edit: sorry, it was a 404 for a moment; I briefly rearranged things on the site and broke the link.
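Progressive resizing is just a training schedule: a few epochs at a small image size, then continue at larger sizes so the later epochs fine-tune on full-resolution detail. A framework-agnostic sketch, where `train_fn` is a stand-in for whatever fine_tune call you use (the sizes and epoch counts below are illustrative, not the values from the post):

```python
def progressive_resizing(train_fn, sizes=(64, 128, 224), epochs_per_size=3):
    """Run train_fn(size=..., epoch=...) at each image size in turn,
    collecting whatever metric it returns. Sketch only."""
    log = []
    for size in sizes:
        for epoch in range(epochs_per_size):
            log.append((size, epoch, train_fn(size=size, epoch=epoch)))
    return log
```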
I worked on getting U-GAT-IT working with fp16. It takes in a picture of a person and maps it to an anime image (CycleGAN-style training in fp16).
Everything is currently a work in progress, but here are the results and a WIP blog: (btw, looking for a job)
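For reference, the usual mixed-precision pattern in plain PyTorch looks like this. It's a generic sketch using torch.cuda.amp, not the actual U-GAT-IT training loop; on a CPU-only machine `use_amp` is False and it just runs in fp32:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

use_amp = torch.cuda.is_available()   # fp16 autocast needs a CUDA device
model = torch.nn.Linear(8, 2)         # toy stand-in for the real generator
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler(enabled=use_amp)  # rescales the loss to avoid fp16 underflow

x = torch.randn(4, 8)
y = torch.randint(0, 2, (4,))
with autocast(enabled=use_amp):       # forward pass in fp16 where safe
    loss = torch.nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()         # backward on the scaled loss
scaler.step(opt)                      # unscales grads, then optimizer step
scaler.update()
```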
Yes, all of this was done in fastai2. I have been working on it since October.
Hey! I happened to be learning about auto encoders when the invitation for this v2 course came in, so I implemented three experiments in v2: https://github.com/jerbly/fastai2_projects. This was a good way to learn how to implement simple PyTorch models in v2 (small enough to run on a CPU), and it includes a custom batch transform class that adds random noise for the denoising auto encoder.
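The idea behind that noise transform, in plain numpy terms: corrupt the batch on the way in and keep the clean batch as the target. A hypothetical sketch, not the repo's actual transform class:

```python
import numpy as np

def noisy_pair(batch, std=0.1, rng=None):
    """Return (noisy_input, clean_target) for a denoising auto encoder.
    Adds Gaussian noise and clips back to the [0, 1] image range."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = np.clip(batch + rng.normal(0.0, std, size=batch.shape), 0.0, 1.0)
    return noisy, batch
```

The model is then trained to reconstruct `clean_target` from `noisy_input`, which is what forces it to learn robust features rather than the identity map.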