Share your work here ✅

We implemented a pretty amazing 2019 paper for image similarity / image retrieval using fast.ai. It’s of much lower complexity than other state-of-the-art methods (e.g. no triplet mining required), trains as fast as a regular image classification DNN, and achieves results on par with or better than the best previously published results.

Repository: https://github.com/microsoft/computervision-recipes/tree/master/scenarios/similarity

11 Likes

This is great. I love work that shows progress on low complexity solutions.

1 Like

Great work and super nice repo!

1 Like

Hi everyone,

Based on the week 2 notebook, I made a mushroom classifier, but only for the 10 most common mushrooms. It gets about 85% accuracy on test data and seems to work decently on random mushroom images you can find online! Any feedback is welcome!

what-is-this-mushroom.onrender.com

1 Like

Based on Lesson 2, and it works pretty well (I used 1920x1080 images; in hindsight I probably did not need images that large):

It can even work on the grown-up version of Shio without having a grown-up version in the training set.


GitHub
data_set_that_I_built

Deployed; please feel free to upload your own image: https://github.com/JonathanSum/Deep-Projects/blob/master/Character_idenf_deploy.ipynb

2 Likes

Hi, Hyungue Lim.
It is pretty interesting. Although I wish the accuracy were higher, I will use your project to identify whether a mushroom is poisonous or not.

I tried your model, and it works pretty well. I hope you will build more models in the future.

1 Like

Hello! Thanks for trying it out!
Yes, I believe the accuracy could be higher with more data. Getting mushroom pictures was not as easy as I thought: there were fewer pictures of specific mushrooms than I expected to find, and often there were several different mushrooms in the same picture.
It would be interesting to see how yours turns out!

1 Like

Great news for the liver transplant business! :smiley:

Joking aside, please check out my posts on deadly mushroom identification, and remember that many people naively believe that AI can do anything perfectly.

2 Likes

Hi

I just published my work on New York City Taxi Fare Prediction (Kaggle competition).

I used pure PyTorch to build a tabular model.

Take a look and PM me with any questions!
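In case it helps anyone, here is a rough sketch of what a pure-PyTorch tabular model can look like: embeddings for the categorical columns concatenated with the continuous columns, followed by an MLP. The embedding sizes, column counts and hidden width below are made-up placeholders, not the values from my notebook.

```python
import torch
import torch.nn as nn

class TabularModel(nn.Module):
    """Embeddings for categorical columns + normalized continuous columns -> MLP."""
    def __init__(self, emb_sizes, n_cont, hidden=128):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Embedding(card, dim) for card, dim in emb_sizes])
        emb_dim = sum(dim for _, dim in emb_sizes)
        self.bn_cont = nn.BatchNorm1d(n_cont)
        self.layers = nn.Sequential(
            nn.Linear(emb_dim + n_cont, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # single regression output: the predicted fare
        )

    def forward(self, x_cat, x_cont):
        # x_cat: (batch, n_categorical) long tensor, x_cont: (batch, n_cont) float tensor
        x = torch.cat([emb(x_cat[:, i]) for i, emb in enumerate(self.embeds)], dim=1)
        x = torch.cat([x, self.bn_cont(x_cont)], dim=1)
        return self.layers(x)

# Hypothetical example: two categorical features (pickup hour, passenger count)
# and four continuous features (pickup/dropoff coordinates).
model = TabularModel(emb_sizes=[(24, 6), (8, 4)], n_cont=4)
```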

1 Like

Hello!

Just finished week 2 lesson and wanted to try something myself.
I have trained a guitar (acoustic or electric) classifier with images from Google search and got a respectable 96% accuracy.
Then I proceeded to deploy it on a server with a basic frontend.

Some gotchas along the way:

  • Data cleaning is really important if your source is not very reliable (mislabeling, irrelevant images).
    Got a ~5% increase in accuracy just by doing that.
  • Following the first point, even with the great widgets included in the Lesson 2 notebook, cleaning is time-consuming, especially with a large dataset.
  • When deploying remember to use the CPU only version of PyTorch if it’s only used for inference.
    Dropped its size from 700MB to 100MB, useful for environments with limited resources.
  • Found FastAPI, a useful and powerful library based on Starlette; it’s easy to set up and uses async/await (a minimal deployment sketch follows this list).
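For anyone who hasn’t used FastAPI with a fastai model before, a minimal inference endpoint can look roughly like the sketch below. This assumes a fastai v1 Learner exported to export.pkl; the route name and file layout are illustrative, not the exact code from my repo.

```python
from io import BytesIO

from fastai.vision import load_learner, open_image
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
learn = load_learner('.', 'export.pkl')  # runs fine on a CPU-only PyTorch install

@app.post('/predict')
async def predict(file: UploadFile = File(...)):
    # Read the uploaded image and run it through the exported learner
    img = open_image(BytesIO(await file.read()))
    pred_class, _, probs = learn.predict(img)
    return {'prediction': str(pred_class), 'confidence': float(probs.max())}
```

Run it with uvicorn (e.g. `uvicorn main:app`) and POST an image to /predict.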

The Jupyter notebook is almost unchanged from the structure of the Lesson 2 one.
You can try the app here: https://agilulfo.herokuapp.com/static/guitars/
Source code on Github: https://github.com/agilul/deep-learning

Thanks, fast.ai, for this great course; I will follow through the next lessons as they become available!

2 Likes

If you want more interpretability out of your fastai tabular models, I’ve ported over SHAP into fastai2:

[SHAP plot image]

10 Likes

I’ve now made this available via pip; you can just do pip install fastshap! The documentation is over at muellerzr.github.io/fastshap

Here is a basic output from a decision_plot:
[decision_plot output image]
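If you haven’t used SHAP before, here is roughly what producing a decision plot looks like with the underlying shap library on a toy prediction function. This is plain shap, not fastshap’s own API (see the documentation above for that), and the data here is random, purely for illustration.

```python
import numpy as np
import shap

# Stand-in for a trained model's prediction function: rows in, scores out.
def predict_fn(rows: np.ndarray) -> np.ndarray:
    return rows.sum(axis=1)

background = np.random.rand(50, 4)                  # reference sample of training rows
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(np.random.rand(1, 4))  # explain one new row
shap.decision_plot(explainer.expected_value, shap_values)
```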

(big help from @nestorDemeure for the documentation and refactoring)

10 Likes

I’ve spent the last month or so exploring GANs (generative adversarial networks), and decided to write a detailed tutorial on training a GAN from scratch in PyTorch. It’s basically an annotated version of this script, to which I’ve added visualizations, explanations and a nifty little animation showing how the model improves over time.

Here’s the tutorial: https://medium.com/jovianml/generative-adverserial-networks-gans-from-scratch-in-pytorch-ad48256458a7

Jupyter notebook: https://jovian.ml/aakashns/06-mnist-gan
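If you just want the gist before reading the full tutorial, the alternating training loop at the heart of a GAN boils down to something like the sketch below. The two tiny fully-connected networks here are stand-ins for illustration, not the models used in the notebook.

```python
import torch
import torch.nn as nn

latent_dim = 64
# Toy generator and discriminator for flattened 28x28 images
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    # real_images: (batch, 784) tensor scaled to [-1, 1]
    b = real_images.size(0)
    real_labels, fake_labels = torch.ones(b, 1), torch.zeros(b, 1)

    # 1) Train the discriminator on a real batch and a generated batch
    fake_images = G(torch.randn(b, latent_dim)).detach()
    loss_d = criterion(D(real_images), real_labels) + criterion(D(fake_images), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator output "real"
    loss_g = criterion(D(G(torch.randn(b, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```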

3 Likes

Thanks a lot for sharing this @JoshVarty!
The blog post and repo you’ve shared are excellent!
You’ve very clearly explained and demoed how to do self-supervised learning with fastai v2. Great job!
There are still lots of questions to be answered, so please, keep sharing your insights.

I implemented manifold mixup (and a variant I call output mixup) for both fastai V1 and V2.

It works even better than input mixup and can be applied to arbitrary input types (and not just pictures):

(see the Mixup data augmentation thread for more information and benchmarks)
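For anyone who hasn’t come across it, the core idea of manifold mixup is to interpolate the hidden representations of two examples at an intermediate layer, and mix their targets with the same coefficient, instead of mixing the raw inputs. A rough conceptual sketch (not the actual callback code from the thread) looks like this:

```python
import numpy as np
import torch

def manifold_mixup_forward(encoder, head, x, y, alpha=0.4):
    """encoder/head: the two halves of a model split at the mixing layer (illustrative names)."""
    lam = np.random.beta(alpha, alpha)        # mixing coefficient
    perm = torch.randperm(x.size(0))          # pair each example with a shuffled partner
    h = encoder(x)                            # hidden representations
    h_mixed = lam * h + (1 - lam) * h[perm]   # interpolate in feature space
    preds = head(h_mixed)
    # The loss is mixed the same way:
    # loss = lam * criterion(preds, y) + (1 - lam) * criterion(preds, y[perm])
    return preds, y, y[perm], lam
```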

3 Likes

I created this repo recently. It’s a collection of Python tricks. Feel free to submit pull requests.

3 Likes

Hey guys!
I’ve been working on a project that is an intersection of two of my hobbies: astrophotography and deep learning. Specifically, on denoising astro pictures, and even more specifically on so-called photon noise: Poisson-distributed noise, which is the dominant type of noise in light-starved astro images.
I don’t have a presentable notebook yet, but the results I’m getting make me really excited. So for now please accept my verbal explanation :slight_smile:
My model is very close to a plain vanilla U-Net, with pre-activations and some other tricks that seem to work well for image-processing tasks (rather than classification): Mish activations and spectral norm instead of batchnorm.
I used a perceptual loss based on VGG, but took the activations before the ReLU for the loss calculation. This helped remove a large portion of the grid artifacts.
The model was trained on images from a single telescope/camera combination, using images from R, G and B filters. I wish I had a much more diverse set of images, but even such a limited dataset seems to work for other telescope/camera combinations as well (though not as well).
The trick in training was not to use any augmentations on the raw images, as that would break the real noise characteristics of individual pixels, so I used only random crops. The training set was created from pairs of images: a raw, unprocessed image from a CCD camera and a corresponding stacked image. Stacking multiple images reduces noise and improves the S/N ratio. The raw and stacked images were then aligned by shifting and rotating the stacked image while leaving the raw image untouched to preserve its characteristics.
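To make the perceptual-loss detail concrete, here is a rough sketch of tapping VGG activations just before a ReLU. The specific layer index and the L1 comparison are illustrative assumptions, not my exact setup.

```python
import torch.nn as nn
from torchvision.models import vgg16

class PreReluPerceptualLoss(nn.Module):
    def __init__(self, layer_idx=14):  # third conv of block 3 in VGG16, i.e. the conv feeding relu3_3
        super().__init__()
        vgg = vgg16(pretrained=True).features.eval()
        # Keep layers up to and including that conv, stopping before its ReLU
        self.features = nn.Sequential(*list(vgg.children())[:layer_idx + 1])
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()

    def forward(self, denoised, target):
        # Compare pre-ReLU feature maps of the denoised output and the stacked reference
        return self.l1(self.features(denoised), self.features(target))
```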

Attached are three images.
The first is an image from the same camera/telescope, but taken through an Ha filter (deep red, cuts light pollution, brings out nebulas).
The second and third are from a different telescope/camera combination, taken through a sulfur filter (very little information at this wavelength, and the S/N is pretty terrible).
I’m really happy with how it turned out.
Limitations:

  • This work has limited scientific applicability. It was designed to produce visually pleasing images, rather than scientifically usable ones.
  • The stars in the second and third images look swollen compared to the originals. This is because the training dataset had oversampled images (stars spanning 3+ pixels in diameter), while the test dataset had undersampled images (<3 px FWHM). Expanding the training set beyond one telescope/camera should help alleviate the problem to some degree.

I hope this makes sense. Please forgive my grammar/spelling.



6 Likes

un-classified house example


Just trained it in a few hours. It is a natural disaster detector for satellite imagery: it evaluates how severe the damage is, shown in color, and detects the damaged houses on the map. Different colors mean different levels of damage, and a colored box marks a damaged house’s location. One interesting point: 1 means no damage and 0 means un-classified, and my model has no issue telling un-classified houses apart from undamaged ones.

2 Likes

Hello! Over the past few weeks I have been developing a bird sound classifier, using the new fastai v2 library!

You can look at my notebook here: https://github.com/aquietlife/fastai2_audio/blob/birdclef/nbs/bird_classifier.ipynb

I wanted to incorporate some of the fastai v2 audio library into this, but I wasn’t sure how best to do it.

The dataset I’m using is the LifeCLEF 2018 Bird dataset, and I re-implemented the BirdCLEF baseline system in Jupyter notebooks, with some refactoring done along the way with the fastai v2 library.

The basic idea of what I did was:

Take the dataset, and use the baseline system’s methodology of extracting spectrograms to get a large number of spectrograms for each of the 1,500 bird species classes.

From there, I used the classic transfer learning technique of training on the spectrogram images with a ResNet model pretrained on ImageNet. I got down to about a 27% error rate!
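In condensed form, the pipeline looks roughly like the sketch below: render mel spectrograms from the audio, save them as images grouped by species, then fine-tune a pretrained ResNet on them. Paths and parameters are placeholders, and I’m using librosa here for brevity rather than the baseline system’s own extraction code.

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
from fastai.vision.all import (ImageDataLoaders, cnn_learner, error_rate,
                               get_image_files, parent_label, resnet34)

def audio_to_spectrogram(wav_path, png_path, sr=22050):
    # Render a mel spectrogram of one recording and save it as an image
    y, _ = librosa.load(wav_path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr)
    librosa.display.specshow(librosa.power_to_db(S))
    plt.axis('off')
    plt.savefig(png_path, bbox_inches='tight', pad_inches=0)
    plt.close()

# After exporting spectrograms into one folder per species ('spectrograms/<species>/*.png'):
dls = ImageDataLoaders.from_path_func(
    'spectrograms', get_image_files('spectrograms'), parent_label, valid_pct=0.2)
learn = cnn_learner(dls, resnet34, metrics=error_rate)  # ImageNet-pretrained backbone
learn.fine_tune(5)
```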

I just wanted to post this now, as I begin to wrap it up, to see if anyone has any feedback or questions. I’m going to be presenting my work at Localhost, a talk in NYC on February 25th, if anyone is around! I’ll be presenting fastai v2 and the audio library to a big audience, so hopefully it will get more people interested in the library :slight_smile:

I’m going to cross-post this in the deep learning audio thread as well. If anyone has any feedback or advice, or needs more clarification, please let me know! I wanted to post it to the community early to have a conversation around it as I develop it further :slight_smile:

6 Likes