Share your V2 projects here

Awesome Molly!!

Excellent intuition! This is very similar to what the new ELECTRA algorithm is doing, and they achieve state-of-the-art results!!

I’ll try to implement ELECTRA in fastai over the following week, maybe we can collaborate =)

3 Likes

Ah! Didn’t know about that! Looks really cool

1 Like

Been working on 2 projects.

Pupil & Visible Eye Segmentation

This work is a collaboration with Verily, to help quickly diagnose dry eye syndrome using pupil and visible-eye-area segmentation. It is also useful for a lot of other similar use-cases they have in mind (including replacing the peripheral vision test).

I cannot share the code / snippets yet, but the results look very promising using fastai v2.

Bokehlicious Selfies

I was experimenting with PyTorch's pre-trained models, and wrote a quick and dirty pipeline for adding bokeh to selfies. The results look pretty good. If you have ideas on how to improve this further, let's talk :slight_smile:

http://rahulrav.com/blog/bokehlicious.html
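For the curious, the core idea boils down to roughly this (the model choice, person class index, and blur radius below are simplified assumptions; the blog post has the full details):

```python
import numpy as np
import torch
from PIL import Image, ImageFilter
from torchvision import models, transforms

# Segment the person with a pre-trained model (DeepLabV3 here; other torchvision models work too)
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("selfie.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]
person_mask = (out.argmax(0) == 15).byte().cpu().numpy()  # 15 = "person" in the VOC label set

# Blur the whole image, then composite the sharp person back over the blurred background
blurred = img.filter(ImageFilter.GaussianBlur(radius=15))
mask_img = Image.fromarray(person_mask * 255).resize(img.size)
result = Image.composite(img, blurred, mask_img)
result.save("bokehlicious_selfie.jpg")
```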

11 Likes

Following the advice of @sgugger, I’ve reproduced lesson 2 on another dataset. I used Bing image search to get images of the two main types of masks being used to protect against COVID-19, i.e. N95 respirators and surgical masks. I built and deployed the model on SeeMe.ai. The model achieves an error rate of 0.02.

Gist is at: https://gist.github.com/asiedubrempong/7d56514519af3d531dc6038d4329e6f4
Github repository: https://github.com/asiedubrempong/fastai-experiments#covid-19-masks

I thought of including home-made masks as well, but I couldn’t get data on those.
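For anyone who wants to reproduce it, the training itself is essentially the lesson 2 recipe; a rough sketch below (the folder names and hyperparameters here are stand-ins; the gist above has the actual code):

```python
from fastai.vision.all import *

# Assumes the Bing search results are already downloaded into masks/n95/ and masks/surgical/
path = Path("masks")
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224), batch_tfms=aug_transforms())

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)

learn.export("masks.pkl")  # the exported learner is what gets uploaded for deployment
```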

7 Likes

Excellent article: informative and concise. I enjoyed reading it. Looking forward to seeing your implementations in fastai2. Thank you for sharing.

1 Like

I’ve been working on putting fastai learners into production locally, adding in some OpenCV and Flask to make the script work asynchronously. See the git repo here:
https://github.com/sutt/fastai2-dev/tree/master/chess-classification-hw

Too bad my model doesn’t really work :roll_eyes:. Help me out on the forum here if you have any advice on doing transfer learning with image classifiers.
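For reference, the skeleton of the script is roughly this (the route name, model path, and threading flag are simplified placeholders rather than the exact code in the repo):

```python
import io
from flask import Flask, jsonify, request
from fastai.vision.all import PILImage, load_learner

app = Flask(__name__)
learn = load_learner("export.pkl")  # learner previously saved with learn.export()

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded image and run it through the fastai learner
    img = PILImage.create(io.BytesIO(request.files["file"].read()))
    pred, pred_idx, probs = learn.predict(img)
    return jsonify({"prediction": str(pred), "confidence": float(probs[pred_idx])})

if __name__ == "__main__":
    # threaded=True lets Flask serve requests concurrently; the repo also uses OpenCV for capture
    app.run(host="0.0.0.0", port=5000, threaded=True)
```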

Hello everyone. I was feeling a little intimidated by all the amazing projects around here, since they all look super cool.

I’m a huge fan of video games, and there is a new game coming out next week that I got really excited about, so I decided to create a “Resident Evil Monster Classifier”. It includes 4 different types of monsters:

  • Licker
  • Nemesis
  • Hunters
  • Zombies

Here is a link to the github repo

Here is an example of the results:

Success!!! :slight_smile:

3 Likes

Was anyone successful at adding the GitHub token to the Jupyter Nbextensions Configurator? I was able to get the nbextensions working, but I don’t really know how to add the GitHub token there.

Updated a v1 project to v2. The model trained beautifully, but I’m having trouble with the new head (?) that fastai v2 implements. That’s blocking me from converting the model to Apple’s Core ML format.

Notebook (WIP):
https://www.kaggle.com/raymonddashwu/fastai-v2-sugar-vs-tahoe?scriptVersionId=31149139

Repo:
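For context, the conversion route I’m attempting is roughly this: tracing the model to TorchScript and converting the traced graph with coremltools (the paths, input size, and assumed coremltools version below are placeholders):

```python
import torch
import coremltools as ct  # assumes coremltools >= 4, which can convert TorchScript directly
from fastai.vision.all import load_learner

learn = load_learner("export.pkl")    # path is a placeholder
model = learn.model.eval().cpu()

example = torch.rand(1, 3, 224, 224)  # input size is a guess; match your DataLoaders
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(traced, inputs=[ct.ImageType(name="input", shape=example.shape)])
mlmodel.save("model.mlmodel")
```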

6 Likes

This isn’t a fancy project, just a simple blog post to help me start digging into the library and setting up a computer vision project. Hopefully I’ll get started on my larger project towards the middle of the course… it feels a bit too daunting at the moment and I’m not sure where to start, but here’s something to help keep me going :slight_smile:

3 Likes

Warning, not for the faint of heart

I hate bug bites, am paranoid about bed bugs, and occasionally get flea bites from my dog (at least that is my assumption). When I do get bitten, I sometimes find myself trying to figure out what kind of bite it is. So, to save my future self some time, I ran some queries on Bing image search and started classifying some pretty gross images.

https://colab.research.google.com/drive/15Pm1PcFmVyO04b3mjw0vU9YAMRfvtzOn (WIP)

I’m currently around 58% accuracy and hope to improve that with other data points (location on the body and part of the world) and a better loss function.
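In case anyone wants to weigh in, the loss-function tweaks I’m thinking of trying look roughly like this in fastai v2 (the class weights below are made up, and `dls` is the usual DataLoaders):

```python
from fastai.vision.all import *

# Option 1: label smoothing, which can help when the labels themselves are noisy
learn = cnn_learner(dls, resnet34, metrics=accuracy,
                    loss_func=LabelSmoothingCrossEntropy())

# Option 2: class-weighted cross entropy if some bite types are much rarer than others
# (these weights are placeholders, one per class in dls.vocab)
weights = tensor([1.0, 2.0, 1.5, 1.0]).to(dls.device)
learn = cnn_learner(dls, resnet34, metrics=accuracy,
                    loss_func=CrossEntropyLossFlat(weight=weights))
```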

1 Like

Hmmm, probably not going to open that link :sweat_smile: But one thing to consider, if you haven’t already, is how different people react to the same type of bite; some might be more allergic than others. Also, the reactions will probably look different on different skin colors. Super interested to hear how you get on though!

Hi,

I have created a near-perfect nuclei segmentation model for microscopic slide images, using just 25 images (and masks) from the dataset and code.
Sample input: [image]

The model was able to predict all the nuclei on an unseen image, close to perfection.
Input: [image]
Prediction: [image]
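In case it’s useful to others, a rough sketch of a fastai v2 segmentation setup for this kind of data (the folder layout, codes, and hyperparameters here are my assumptions, not my exact code):

```python
from fastai.vision.all import *

path = Path("nuclei")                  # assumes images/ and masks/ subfolders
codes = ["background", "nucleus"]

dls = SegmentationDataLoaders.from_label_func(
    path, bs=4,
    fnames=get_image_files(path/"images"),
    label_func=lambda o: path/"masks"/f"{o.stem}_mask{o.suffix}",
    codes=codes,
    item_tfms=Resize(256),
    batch_tfms=aug_transforms())

learn = unet_learner(dls, resnet34, metrics=Dice())
learn.fine_tune(20)
```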

10 Likes

Thanks, @morgan. How allergic one is, is tough, and I don’t think I have the data to solve that :frowning:

The skin color thing is super important and (I think) solvable. I’m glad you mentioned it. If I can stomach it, I’ll run the images through OpenCV contours or convert them to greyscale and then try retraining.
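If I go the greyscale route, the preprocessing itself would just be a pass over the downloaded images with OpenCV, something like this (the folder path is a placeholder):

```python
import cv2
from pathlib import Path

# Convert every downloaded image to greyscale on disk before retraining,
# writing it back as 3 identical channels so a pretrained model still accepts it
for p in Path("bites").rglob("*.jpg"):
    img = cv2.imread(str(p))
    if img is None:
        continue  # skip anything OpenCV can't read
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(str(p), cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR))
```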

1 Like

Does anyone have experience scraping google maps/google street view images and then doing deep learning to identify street signs? Any advice on where to start with this (either the scraping side or the deep learning model selection side) would be welcome.

I’ve not done it and can only offer general thoughts. I googled a bit and found this: google_streetview — google_streetview 1.2.9 documentation. There might be better options with some more exploration.
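From its docs, usage looks roughly like this (I haven’t run it, so treat the parameters, and the need for a Google API key with Street View access, as my reading of the documentation rather than tested code):

```python
import google_streetview.api

# One dict per image request; location can be "lat,lng" or an address
params = [{
    "size": "640x640",
    "location": "46.414382,10.013988",
    "heading": "90",
    "key": "YOUR_GOOGLE_API_KEY",
}]

results = google_streetview.api.results(params)
results.download_links("streetview_downloads")  # saves the images plus metadata locally
```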

I have been playing about with training a Danbooru multi-label classifier using v2, so I can use it in colorising and semi-supervised projects.

epoch  train_loss  valid_loss  accuracy_multi (thresh 0.2)  accuracy_multi (thresh 0.4)  time
0      0.016210    0.014924    0.995065                     0.996054                     6:10:34
1      0.015684    0.015714    0.995111                     0.995989                     6:05:17

The two accuracy_multi columns use thresholds of 0.2 and 0.4, so I am gonna say my training is going well.
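For reference, tracking the metric at two thresholds is just a matter of passing two partials to the learner, roughly like this (the architecture and loss shown are illustrative, not the exact ones from my run):

```python
from functools import partial
from fastai.vision.all import *

metrics = [partial(accuracy_multi, thresh=0.2),
           partial(accuracy_multi, thresh=0.4)]

# dls is a multi-label DataLoaders (MultiCategoryBlock), so the loss is binary cross-entropy
learn = cnn_learner(dls, resnet50, metrics=metrics,
                    loss_func=BCEWithLogitsLossFlat())
```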

There you go, here is my first fastai2 blog post!

5 Likes

Hi,

I have created a super-resolution mobile app based on fastai’s superres notebook.

I didn’t do any interesting deep learning work here (although I’d love to implement this zero-shot learning method in the future!); my main work was on the deployment and demo side. I created both a web service and a mobile app that let you take or upload a picture, enhance it, and compare the new version against the old. Hopefully, this code can also be reused to demo many of the other cool projects shown here (style transfer, semantic segmentation, etc.)!

Unfortunately, the server seems to crash with large pictures, so I restricted it to pictures under 2 MB. If you have any idea how I could solve this issue, please let me know!
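One workaround I’m considering is downscaling very large uploads on the server before they reach the model, roughly like this (the size cap is arbitrary, and it does throw away detail, which isn’t ideal for super-resolution):

```python
import io
from PIL import Image

MAX_SIDE = 1024  # arbitrary cap; tune to whatever the server's memory can handle

def shrink_if_needed(img_bytes: bytes) -> bytes:
    """Downscale an uploaded image if its longest side exceeds MAX_SIDE."""
    img = Image.open(io.BytesIO(img_bytes)).convert("RGB")
    if max(img.size) > MAX_SIDE:
        img.thumbnail((MAX_SIDE, MAX_SIDE))  # resizes in place, keeping aspect ratio
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=90)
    return buf.getvalue()
```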

4 Likes