I have been working on low-light image enhancement using a GAN and a perceptual/feature loss. I got the inspiration to try this from DeOldify.
I will share the GitHub repo soon.
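For anyone curious about the feature loss part: the core idea is to compare images in the feature space of a frozen pretrained network instead of pixel space. A minimal generic PyTorch sketch (the VGG16 layer cut-off is illustrative, not my exact training code):

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """Compare images in VGG16 feature space instead of pixel space."""
    def __init__(self, cut=16):  # cut around conv3_3; the layer choice is illustrative
        super().__init__()
        vgg = models.vgg16(pretrained=True).features[:cut].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)         # frozen feature extractor
        self.vgg, self.mse = vgg, nn.MSELoss()

    def forward(self, enhanced, target):    # inputs assumed ImageNet-normalized
        return self.mse(self.vgg(enhanced), self.vgg(target))
```

During GAN training this term is typically added to the adversarial loss as a weighted sum.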
Simple crowd estimation using image regression on the “Mall” dataset.
https://www.kaggle.com/ferasoughali/crowd-estimation-fastai-v2
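If you just want the gist without opening the kernel, image regression in fastai v2 looks roughly like this (the CSV layout, column names, and y_range are assumptions for illustration, not the kernel's exact code):

```python
from fastai.vision.all import *

# Assumed layout: a CSV with one row per frame and its head count.
df = pd.read_csv('labels.csv')                       # columns: 'fname', 'count'

dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock),            # predict one number per image
    get_x=ColReader('fname', pref='frames/'),
    get_y=ColReader('count'),
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=Resize(224))
dls = dblock.dataloaders(df)

learn = cnn_learner(dls, resnet34, y_range=(0, 60),  # crowd counts stay small
                    loss_func=MSELossFlat(), metrics=mae)
learn.fine_tune(5)
```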
I have been researching the applications of CycleGANs (and related models) in pathology. I have written a paper that was presented at the ICML Computational Biology 2020 workshop:
I used fastai v2 and nbdev to accelerate my development. I collaborated with @ashaw and @danielvoconnor as part of the WAMRI initiative.
Recently, I have been working on refactoring my code into a package for unpaired image-to-image translation. Check it out here:
Also, I will be presenting my research, talking about CycleGANs, and sharing code TOMORROW, August 28th, at 9am PDT:
Check it out!
17 and 2nd yr PhD??? Bro…
Amazing! Looking forward to this!
Is this going to be recorded?
Yes it will be available afterwards as a YouTube video at the same link.
Hi All,
I tried to generate face sketches using super-resolution, self-attention, and a GAN. Here are some of the results I got; please have a look.
Thank you Jeremy for the wonderful course.
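For anyone wondering what the self-attention piece looks like: GANs usually follow the SAGAN formulation. A generic PyTorch sketch of that standard layer (not my exact code):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial positions."""
    def __init__(self, n_channels):
        super().__init__()
        self.query = nn.Conv1d(n_channels, n_channels // 8, 1)
        self.key   = nn.Conv1d(n_channels, n_channels // 8, 1)
        self.value = nn.Conv1d(n_channels, n_channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned blend, starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                  # treat pixels as a sequence
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, hw, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                 # residual connection
```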
Made a very basic dog classifier based on the knowledge from Lessons 1 & 2. It can identify 17 different breeds of dogs. I used the ResNet-101 architecture and 4 epochs, with a dataset of 150 images per breed, to train the model to approx. 96% accuracy on the validation set.
Find the GitHub repo here
Try it here (it’s a bit slow, maybe due to the heavy architecture, so please bear with it).
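For reference, the training recipe boils down to something like this fastai v2 sketch (the folder layout and transforms are my assumptions, not the exact notebook):

```python
from fastai.vision.all import *

# Assumed layout: dogs/<breed_name>/*.jpg, ~150 images per breed.
path = Path('dogs')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms())

learn = cnn_learner(dls, resnet101, metrics=accuracy)
learn.fine_tune(4)                  # 4 epochs, as described above
```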
Hello
I did the same for butterflies…
Hello all! I wrote a blog post describing a little bit of playing around I did with the movie review / NLP sentiment analysis at the end of lesson 1. Some surprises were found, e.g., the model thought ‘I disliked the movie’ and ‘I hated the movie’ were positive reviews!
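If you want to reproduce the probing, the lesson-1 setup is roughly this (hyperparameters follow the course notebook; treat it as a sketch):

```python
from fastai.text.all import *

# Lesson-1 style IMDB sentiment classifier.
dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)

# Probe the model with near-synonymous phrasings:
for review in ('I disliked the movie', 'I hated the movie', 'I loved the movie'):
    label, _, probs = learn.predict(review)
    print(review, '->', label, probs)
```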
@vijish, where is the code to generate the face sketches?
Would love to try it.
Cheers,
Sid
How to identify ‘data drift’, a.k.a. ‘out-of-domain data’, a.k.a. ‘is my test set distributed in the same way as my training set?’, with image data, using fastai and alibi-detect.
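The alibi-detect side is only a few lines once you have features. A minimal sketch, assuming you have already extracted per-image feature vectors (e.g. penultimate-layer activations from a fastai model; the .npy filenames are hypothetical):

```python
import numpy as np
from alibi_detect.cd import KSDrift

# (n, d) feature arrays extracted from train and test images (extraction not shown).
feats_train = np.load('train_feats.npy')   # hypothetical files
feats_test  = np.load('test_feats.npy')

cd = KSDrift(feats_train, p_val=0.05)      # per-feature K-S tests with correction
preds = cd.predict(feats_test)
print('drift detected?', bool(preds['data']['is_drift']))
print('first feature-level p-values:', preds['data']['p_val'][:5])
```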
Just a few days ago, a new (getting started) competition on unpaired image-to-image translation was launched on Kaggle. To demonstrate my package, I have made a quick kernel:
https://www.kaggle.com/tanlikesmath/upit-a-package-for-unpaired-img2img-translation
Let me know what you think! Hope it is helpful!
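For anyone new to unpaired translation: the heart of CycleGAN is the cycle-consistency objective. A generic PyTorch sketch of the generator-side loss (this illustrates the idea only, it is not the UPIT API; the identity loss is omitted for brevity):

```python
import torch
import torch.nn as nn

def cyclegan_generator_loss(G, F, D_A, D_B, real_A, real_B, lambda_cyc=10.0):
    """G: A->B generator, F: B->A generator, D_A/D_B: discriminators."""
    l1, mse = nn.L1Loss(), nn.MSELoss()     # LSGAN-style adversarial loss

    fake_B, fake_A = G(real_A), F(real_B)
    # Adversarial: fool each discriminator into predicting "real" (ones).
    adv = mse(D_B(fake_B), torch.ones_like(D_B(fake_B))) + \
          mse(D_A(fake_A), torch.ones_like(D_A(fake_A)))
    # Cycle consistency: A -> B -> A and B -> A -> B should reconstruct the input.
    cyc = l1(F(fake_B), real_A) + l1(G(fake_A), real_B)
    return adv + lambda_cyc * cyc
```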
I wrote a quick article on how to use Streamlit to quickly test your models. This can be used for fastai models as well. Link below
https://towardsdatascience.com/streamlit-use-data-apps-to-better-test-your-model-4a14dad235f5
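To give a taste of how little code it takes, here is a minimal sketch of a Streamlit test app for a fastai model (the export.pkl path is an assumption):

```python
import streamlit as st
from fastai.vision.all import load_learner, PILImage

st.title('Model test bench')
learn = load_learner('export.pkl')          # your exported fastai learner

uploaded = st.file_uploader('Upload an image', type=['jpg', 'jpeg', 'png'])
if uploaded is not None:
    img = PILImage.create(uploaded)
    st.image(img, caption='Input')
    label, _, probs = learn.predict(img)
    st.write(f'Prediction: {label} ({probs.max():.2%})')
```

Run it with `streamlit run app.py` and you get an interactive page for drag-and-drop testing.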
I’ve written a guide on how to do cross-platform inference using fast.ai models. This seems to come up often in this forum, so I’ve written up how to export your model using ONNX (instead of pickle) and then use it on a completely different platform (in this case .NET Core, but it generalizes to other platforms and languages).
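The export step itself is small. A hedged Python sketch (input size and tensor names are assumptions, and you still need to replicate your training-time preprocessing, e.g. normalization, on the target platform):

```python
import numpy as np
import torch
import onnxruntime as ort
from fastai.vision.all import load_learner

learn = load_learner('export.pkl')          # trained fastai learner (path assumed)
model = learn.model.eval().cpu()

dummy = torch.randn(1, 3, 224, 224)         # match the input size you trained with
torch.onnx.export(model, dummy, 'model.onnx',
                  input_names=['input'], output_names=['output'])

# Inference works anywhere ONNX Runtime runs (Python shown; same idea in .NET):
sess = ort.InferenceSession('model.onnx')
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
print(sess.run(None, {'input': x})[0].shape)
```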
I made a simple model to differentiate between Coral Snake and Kingsnake.
Try it out!
I’ve updated our quick guides to deploy your computer vision model on seeme.ai.
Uploaded models automatically get converted to ONNX and Core ML (for iPhone/iOS).
You can use your model to make predictions on iOS (Apple App Store link), Android (Google Play Store link), and the web, and share it publicly or privately.
I made an image classifier of Malaysian rice cuisine (since I live in Germany and can’t visit Malaysia for now) based on the bear classifier example in the course.
A short demo video: https://www.loom.com/share/0667e10601724d5b85c3670e54562eee
Scripts and Binder link are available at my Gitlab repo: https://gitlab.com/nghweigeok/fastai_malaysian_rice_image_classifier