Share your work here ✅

(Black and white) image colorizer in a self-contained notebook:

I used this project to teach a few high school students to use deep learning. I think it was fun because once the project (4 days) was over, they could still run the network in Google Colab for inference and even “colored” the trailers of movies like Casablanca and Schindler’s List.

I also learnt a few things in the process:

  • The network is just a UNet that predicts the CbCr color components by minimizing MSE. For skin tones, sky, vegetation, sea, etc., which have consistent colors, the network does a pretty good job; however, for things whose color varies wildly the network “does its job” of minimizing MSE and predicts values close to (0.5, 0.5), which in CbCr is gray. I am experimenting with a GAN to make the network predict plausible colors, not averages (grays). (A small sketch of the Y/CbCr setup follows this list.)
  • I have two GPUs and even with 32 threads in my system the CPU was the bottleneck:

    Although I use turbo-jpeg-flavored Pillow, a cursory inspection of sudo perf top reveals:

Look at ImagingConvertRGB2YCbCr … it reveals a nice (albeit big in scope) opportunity for fastai imaging models: most JPEGs are encoded in the YUV colorspace with the UV components (equivalent to CbCr) downsampled 2:1. When you open a JPEG file, the library (libjpeg or libjpeg-turbo) internally decodes the YUV components, upscales UV if needed (most of the time) and then converts the result to RGB. In our colorizer’s case that’s a waste, because we could just open the YUV natively and also make the UNet predict the native downsampled (2:1) UV components. For pretrained image models it makes sense to work in YUV regardless, as shown here: https://eng.uber.com/neural-networks-jpeg/ - you get a better network and less CPU overhead. In fastai it could be done by extending Image and training image models in the new colorspace, injecting the 2:1 downsampled UV components into most modern architectures after the first 2:1 downscaling.
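For anyone curious what that (Y in, CbCr out) setup looks like in code, here is a minimal sketch with Pillow and PyTorch - it is not the notebook’s actual pipeline, and note that convert("YCbCr") still pays the RGB roundtrip described above (that is the ImagingConvertRGB2YCbCr showing up in perf):

```python
# Minimal sketch (not the notebook's actual pipeline): build (Y, CbCr) pairs
# for a colorizer UNet. convert("YCbCr") still goes through the RGB roundtrip
# discussed above, since libjpeg hands Pillow an RGB image first.
import numpy as np
import torch
from PIL import Image

def y_cbcr_pair(path):
    arr = np.asarray(Image.open(path).convert("YCbCr"), dtype=np.float32) / 255.0
    t = torch.from_numpy(arr).permute(2, 0, 1)   # 3 x H x W, channels: Y, Cb, Cr
    return t[:1], t[1:]                          # 1 x H x W input, 2 x H x W target

# With an MSE loss, a pixel whose color varies wildly across the dataset is
# best predicted as the mean CbCr, i.e. roughly (0.5, 0.5) -> gray.
```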

14 Likes

Hi,
I work for an art studio that makes pottery plates with paintings on them. We have a website to sell the plates. On the site, each artist gets a page to introduce and sell their plates with their paintings. Each artist has a unique style of painting, but the plates themselves are the same.
I wanted to make a recommender system based on image similarity of the plates.
I trained a ResNet model and used hooks to get the activation outputs for each plate image.
Then I used nearest-neighbour algorithms to find similar images based on a cosine metric.
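For anyone wanting to try something similar, here is a rough sketch of that recipe (not the exact code from my notebook - the data iterable is a placeholder): pooled ResNet activations via a forward hook, then cosine nearest neighbours with scikit-learn.

```python
# Rough sketch (not the repo's exact code): pooled ResNet features via a
# forward hook + cosine nearest neighbours. `plate_batches` is a placeholder
# for whatever DataLoader yields the plate images.
import torch
from torchvision import models
from sklearn.neighbors import NearestNeighbors

model = models.resnet34(pretrained=True).eval()

feats = []
def hook(module, inp, out):
    feats.append(out.detach().flatten(1))        # batch x 512 pooled activations

handle = model.avgpool.register_forward_hook(hook)

with torch.no_grad():
    for xb in plate_batches:                     # placeholder: batches of (B, 3, 224, 224) tensors
        model(xb)
handle.remove()

emb = torch.cat(feats).numpy()
knn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(emb)
dist, idx = knn.kneighbors(emb[:1])              # the 5 plates most similar to the first one
```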
As an example, you can see the nearest image to this plate:

returns this:

As you can see, the results are not bad at all!
The notebook is available in this GitHub repo: Image similarity
(I would really appreciate your feedback)

14 Likes

I took my first MOOC in 2014, but fastai is a game changer for me. The level of participation is on a different scale. If fastai had been this well structured from the beginning, I probably would have learned less. Watching the evolution from v1 to v3, I had the chance to get a glimpse of how software is actually built.

I have not accomplished anything big with deep learning yet, but I think this is still worth a celebration.

100k views accomplished :slight_smile:

It all started with a simple blog post about how to set up fastai with GCP. Notice that there is a 2k view spike in Feb 2018; it is all because of Jeremy’s retweet. Without this initial motivation, I probably would not have continued to write.

I am very glad that I wrote my first blog post a year ago; it has had a great impact on me, including on my way of working and my mindset. I could not thank fastai enough. The more you dedicate, the more you learn.

7 Likes

Just finished writing up my first hands-on experience with FastAI in a Medium post.
It is about classifying malaria cells using a CNN. I am just loving this library.
How I got a world-class Malaria classification model with FastAI within two hours.
Code can be found on my GitHub profile.

1 Like

I made a simple widget for labeling image data.

It is heavily based on this text data labeler and the image cleaner from the fastai library. While it is not as polished as those, it does work!

Please try it out if you need to label some image data and let me know how I can make it better!

9 Likes

FreeCodeCamp asked me to rewrite my project for Medium. While it doesn’t have anything groundbreaking for this group, it was a good experience in getting published online, and I hope it encourages you to submit your projects outside of the forums as well.

4 Likes

Hello!

A colleague referred me to fastai and it is awesome! I brainstormed yesterday and was thinking about Minority Report, the way Tom Cruise interacts with the screens. Isn’t that the dream of every presenter? So I tried something small: 3x25 images of my pointing finger floating above my phone.

I created a pandas DataFrame with a column for the filename and columns for the approximate x and y positions, and just trained the thing to see what would happen. I expected the (x, y) location to somehow ‘stick’ with the image, so that when I predicted on a new image I would also get x and y values corresponding to where my finger was.

It doesn’t quite work that way :frowning: I get “MultiCategory” and two 4- or 5-dimensional tensors. I tried to use their means, but that was also not quite good. Anyway, I had fun playing with this for some hours, and if anyone has an idea to help out: please :blush:
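For reference, here is a rough, untested sketch of how this could be set up as points regression with fastai v1’s data block API instead of classification - the file name, column names, image size and hyperparameters are placeholders, not taken from my repo:

```python
# Hedged sketch, fastai v1: treat the finger position as an image point to
# regress, rather than letting the x/y columns become multi-category labels.
# File name, column names, image size and epochs below are placeholders.
from fastai.vision import *
import pandas as pd

df = pd.read_csv('finger_positions.csv')          # columns: filename, x, y (pixel coords)

def get_point(item):
    row = df[df.filename == item.name].iloc[0]
    return tensor([[row.y, row.x]])               # fastai expects points in (y, x) order

data = (PointsItemList.from_df(df, path='images', cols='filename')
        .split_by_rand_pct(0.2)
        .label_from_func(get_point)
        .transform(get_transforms(), tfm_y=True, size=(240, 320))
        .databunch()
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34)        # regression head outputs 2 coordinates
learn.fit_one_cycle(5)
```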

regards, Jairo

https://github.com/jairobbo/interactive-presentation.git

1 Like

Hello :wave:t3:,

As I advanced in the course, I wanted to improve my Simpsons fine-grained classifier. Indeed, the model would not work if there were multiple characters in the image, as it would only output the name of the character it was most confident about.

So I built a multi-label classifier (check out the link at the end of the post) able to recognize the several characters present in an image. The most difficult part of the project for me was building the dataset (combining images & labelling them), as the training process was fairly similar to what I had done previously.
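For reference, the training side of a multi-label setup like this is quite compact in fastai v1. Here is a minimal sketch - the file and column names are assumptions, not taken from the linked notebook:

```python
# Minimal fastai v1 multi-label sketch (file/column names are assumptions):
# each image carries several space-separated character tags.
from fastai.vision import *
import pandas as pd

df = pd.read_csv('labels.csv')                    # columns: fname, tags (e.g. "homer bart lisa")

data = (ImageList.from_df(df, path='data', cols='fname')
        .split_by_rand_pct(0.2)
        .label_from_df(cols='tags', label_delim=' ')
        .transform(get_transforms(), size=224)
        .databunch()
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34,
                    metrics=[partial(accuracy_thresh, thresh=0.2)])  # thresholded metric for multi-label
learn.fit_one_cycle(5)
```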

I would also like to give a shout-out to @balnazzar, who helped me a lot on this project through his tips about dataset creation and the fastai library :+1:t3:

3 Likes

Deploy PyTorch Models to Production via Panini

I always had a tough time deploying my models using Flask + Gunicorn + Nginx. It requires a lot of setup time and configuration. Furthermore, running inference through Flask is slow and requires custom code for caching and batching. Scaling across multiple machines with Flask also causes many complications. To address these issues, I’m working on Panini.

Panini

https://panini.ai/ can deploy PyTorch models into Kubernetes production within a few clicks and make your model production-ready with real-world traffic and very low latency. Once the model is deployed on Panini’s server, it provides you with an API key for inference. Panini’s query engine is developed in C++, which provides very low latency during model inference, and a Kubernetes cluster is used to store the model, so it is scalable to multiple nodes. Panini also takes care of caching and batching inputs during model inference.

Here is a medium post to get started: https://towardsdatascience.com/deploy-ml-dl-models-to-production-via-panini-3e0a6e9ef14

This is the internal design of Panini:

Let me know what you guys think!

7 Likes

Hi,
I wanted to share with you my first blog post, about my efforts building a semi-supervised location-to-vector model based on this
Blog

It took about 2 weeks to train on data generated from OpenStreetMap. I made quite a few changes from their implementation and added support for mixed precision using plain PyTorch. I wrote a blog post about it here, and you can find the source code here: Loc2vec

The picture below shows interpolation between two locations, done by interpolating the embeddings and finding the nearest location at each step.


This picture shows the nearest neighbors for the queried image (first column):
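For readers wondering how the interpolation in the first picture is produced, here is a hedged sketch - the embedding file and indices are placeholders, not what the repo actually uses:

```python
# Hedged sketch of the interpolation above (embedding file and indices are
# placeholders): walk linearly between two location embeddings and snap
# each step to the nearest real location in embedding space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

embeddings = np.load('loc_embeddings.npy')         # (num_locations, dim)
knn = NearestNeighbors(n_neighbors=1).fit(embeddings)

a, b = embeddings[10], embeddings[500]              # the two endpoints to interpolate between
for t in np.linspace(0, 1, num=8):
    z = (1 - t) * a + t * b                         # interpolated embedding
    _, idx = knn.kneighbors(z[None, :])
    print(f"t={t:.2f} -> nearest location index {idx[0, 0]}")
```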

I’ve done this in plain PyTorch, as I took this course when it was taught with Keras and PyTorch. I’d like to port it to fast.ai to use some of the learning rate scheduling / stochastic weight averaging / DenseNet features, etc. Specifically, I got stuck on data loading. (I didn’t watch the latest version of the course, and the answer could simply be to watch lecture x.)

This is my first blog post, so any feedback is appreciated. Also, on the technical front, how would you insert a video (say images/tsne.mp4) into a GitHub README.md?

7 Likes

Hi
I’ve always been fascinated by paintings and their different styles. I’ve been working on a project to compare Baroque paintings with ancient Greek pottery paintings.
Baroque art is characterized by great drama; rich, deep color; and intense light and dark shadows, as opposed to Greek pottery paintings, where figures and ornaments were painted on the body of the vessel using shapes and colors reminiscent of silhouettes.
I used a CycleGAN to experiment and see which features in each style were most important for the model when turning paintings from one style into the other.
As an example:


You can see the generator of Baroque paintings tries to capture deep color contrasts and shadow features, but that’s a hard job, since Greek paintings usually represent the subject as a solid shape of a single color, usually black, with its edges matching the outline of the subject. The Greek-style generator, though, almost captures the main features, such as dark shadows and solid edges.
The training took about 6 hours on a single GPU. The notebook is available in the GitHub repo: CycleGAN
I would really appreciate your feedback.

4 Likes

Don’t judge a book by its cover!
(Let my CNN do it for you :wink: )

I just finished my second project: training a resnet34 on 15 different classes of book covers (:blue_book::closed_book::green_book::notebook::notebook_with_decorative_cover::orange_book:) and I’m super excited to share my results! A few thousand images, a bit of data grooming and architecture tweaking, an hour of training, and it’s pretty stable at around 45% accuracy! (Random guessing would be 7%.) I believe a good bit of the remaining error is due to my choosing somewhat ambiguous/overlapping classes.

And now for the fascinating results:

  • Easy: Romance Novels and Biographies have an unambiguous stand-apart style

  • Runners Up: Fantasy, Cookbooks, and Children’s Books are pretty straightforward, too
    fantasy cook child

  • Most Confused: Mystery x Thriller x Crime, and SciFi x Fantasy (hard to draw the line sometimes)

  • Hardest: SciFi turns out to be more of a mechanic than a content category, and can read as many different subjects
    scifi

  • WTF: Western is a genre dedicated to tales of cowboys, but it can also cross over fabulously…
    unicorn

If anyone has suggestions for breaking through my personal accuracy asymptote, I’d love to chat!
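For reference, fastai v1’s interpretation tools can surface exactly this kind of confusion breakdown; here is a minimal sketch, assuming a trained Learner called learn (not necessarily how I generated the list above):

```python
# Minimal fastai v1 sketch (assumes a trained Learner called `learn`):
# inspect which genre pairs the model mixes up most often.
from fastai.vision import *

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix(figsize=(10, 10))    # full 15x15 genre confusion matrix
interp.most_confused(min_val=5)                   # (actual, predicted, count) tuples
interp.plot_top_losses(9)                         # the covers the model got most wrong
```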

6 Likes

I created a dog breed image classifier using lesson3-planet.ipynb as starter code and the Stanford Dogs dataset from Kaggle.

Give it a try with your dog pictures here: https://whatdog.onrender.com/

1 Like

Hi Maria,
Could you share the val/train IDs, so results can be compared on the same data split?
That would be great.
Thanks

Ever taken a photo, but struggle to come up with the perfect social media caption?

Meet WittyTourist
GitHub: https://github.com/DaveSmith227/witty-tourist

It’s a web app that gives you a witty caption when you upload a pic with a San Francisco landmark. The app detects the landmark in the photo (currently trained on 13 landmarks) and returns 1 of several pre-loaded captions for that landmark.

Enjoy the fun mock-ups with Danny Tanner and Nicolas Cage :selfie: :bridge_at_night: :laughing:

Building the dataset - I trained it with ~5,000 photos scraped from Instagram (and tediously hand-labeled…) and it achieves 97% accuracy on a separate test set (~1,000 images) scraped from Google.

Deployment - The app was deployed with Render which is SUPER EASY and updates immediately when you push new updates to your app’s GitHub repo - thank you @anurag!

Jupyter notebook (on GitHub link above) - Walks through the full training loop, how to scrape images, and how to build and load a separate test set.

Training tip - Start training with 128x128 images and then re-train with the same images at 256x256 to improve accuracy, as shown by Jeremy in Lesson 3 (my validation accuracy went from 91% to 95% without overfitting thanks to this helpful tip).
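Roughly, that tip looks like this in fastai v1 - the paths and hyperparameters here are placeholders, not the notebook’s actual values:

```python
# Rough fastai v1 sketch of the progressive-resizing tip (paths and
# hyperparameters are placeholders, not the notebook's actual values).
from fastai.vision import *

data_128 = ImageDataBunch.from_folder('landmarks', valid_pct=0.2,
                                      ds_tfms=get_transforms(), size=128, bs=64
                                     ).normalize(imagenet_stats)
learn = cnn_learner(data_128, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(5)

# Same images, same weights - swap in a 256x256 version of the data
# and fine-tune for a few more epochs.
data_256 = ImageDataBunch.from_folder('landmarks', valid_pct=0.2,
                                      ds_tfms=get_transforms(), size=256, bs=32
                                     ).normalize(imagenet_stats)
learn.data = data_256
learn.unfreeze()
learn.fit_one_cycle(5, max_lr=slice(1e-5, 1e-4))
```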

I love playing “tour guide” to friends/family visiting SF and this toy project served as a fun way to learn a variety of new skills (learning the fast.ai library, deploying an app, HTML/CSS, etc…) and bring a bit of joy to others. I also got inspiration from @whatrocks’s Clabby cousin app so thank you as well!

Let’s continue to share and inspire each other’s ideas :slight_smile:

11 Likes

I trained StyleGAN on a portrait art dataset and thought the results were decent. This was done with transfer learning, so I was able to train on a K80 on Colab much faster than from scratch. Here is the GitHub repo and the results:

I then trained it further on more modern art, and this was the result:

7 Likes

Hello everyone. Thank you for sharing all your work. Some of you are developing really inspiring applications.

To get a better understanding of the notebooks, I try as much as I can to apply them to different datasets, sometimes by joining a Kaggle competition, sometimes by searching on https://toolbox.google.com/datasetsearch. Here are some of my efforts.

You can run all these examples directly on Google Colab.

Lesson 1: I made a classifier to determine whether a specific painting is by Rembrandt, Van Gogh, Leonardo or Vermeer.

Lesson 3: This time I built an NLP application to classify whether an SMS message is ham or spam.
Thanks to this Kaggle Dataset (https://www.kaggle.com/uciml/sms-spam-collection-dataset)

Lesson 4: When you start with machine learning on Kaggle, the challenge is to build your first model on the Titanic dataset (https://www.kaggle.com/c/titanic), where you have to predict which passengers survived. A neural network didn’t give me the best results, but it was a nice exercise to play with the tabular notebook.

Lesson 5: This is a copy of a Kaggle notebook (https://www.kaggle.com/aakashns/pytorch-basics-linear-regression-from-scratch) to get a better understanding of the PyTorch basics (loss, gradient descent, backpropagation) by building a linear regression model on a really simple dataset.
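In the same spirit, here is a tiny linear-regression-from-scratch loop in plain PyTorch (toy data, not the notebook’s dataset), covering exactly those basics - loss, backpropagation and a gradient-descent step:

```python
# Tiny linear regression from scratch in plain PyTorch (toy data, not the
# Kaggle notebook's dataset): loss, backpropagation, gradient descent.
import torch

x = torch.randn(100, 1)
y = 3 * x + 2 + 0.1 * torch.randn(100, 1)   # targets: y = 3x + 2 + noise

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

for epoch in range(200):
    pred = x * w + b
    loss = ((pred - y) ** 2).mean()         # MSE loss
    loss.backward()                         # backpropagation
    with torch.no_grad():                   # one gradient-descent step
        w -= 0.1 * w.grad
        b -= 0.1 * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())                   # should end up close to 3 and 2
```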

5 Likes

I have created a web app for demonstrating the capabilities of the Poisonous plant classifier model. I have deployed the web app on Heroku. Here, take a look: https://poisonous-plant-classifier.herokuapp.com
You can upload a picture and find out whether the plant is one of the 8 categories of poisonous plants that the model can identify. I used resnet18 due to the limitations of the Heroku free tier. This model achieved 93% accuracy on the test data. Here is the resnet18 kernel:
https://www.kaggle.com/nitron/poisonous-plant-classifier-renset18
What do you think?
Next, I am going to make the model predict plants from a live video stream :slight_smile:
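If it helps, here is one hedged sketch of how that live-video idea could look with fastai v1 and OpenCV - the exported model filename is an assumption:

```python
# Hedged sketch of the live-video idea (fastai v1 + OpenCV; the exported
# model filename is an assumption): classify each webcam frame.
import cv2
import numpy as np
from fastai.vision import load_learner, Image, pil2tensor

learn = load_learner('.', 'export.pkl')           # assumed exported plant classifier

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR, fastai wants RGB
    img = Image(pil2tensor(rgb, np.float32).div_(255))
    pred_class, _, probs = learn.predict(img)
    print(pred_class, probs.max().item())
    cv2.imshow('plant classifier', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):         # press q to stop
        break
cap.release()
cv2.destroyAllWindows()
```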

4 Likes

Super impressive project :slight_smile:

I don’t think you can insert a video into Markdown, but I might be wrong on that one. However, there is a really nice way of including one in a Jupyter notebook (along with a bunch of other media formats), in case at some point you want to leverage notebooks to showcase your work: IPython.lib.display
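For example, with the images/tsne.mp4 file mentioned above, one way to do it in a notebook cell is:

```python
# Embed a local video file directly in a Jupyter cell (the path is the
# images/tsne.mp4 mentioned earlier).
from IPython.display import Video

Video("images/tsne.mp4", embed=True)
```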

So many amazing projects shared here :slight_smile: I think we are seeing a fastai explosion - so hard / impossible to keep up with what is happening these days :slight_smile:

Just wanted to share my whale repository now that it is completed :slight_smile:

It contains a bunch of stuff including:
:white_check_mark: training a classifier
:white_check_mark: training on bounding boxes (localization)
:white_check_mark: landmark detection
:white_check_mark: bounding box extraction
… and finally training a model that combines classification and metric learning (places in top 7% of a recent Kaggle competition)

From the perspective of being able to leverage fastai functionality, some of the notebooks do a better job, some worse. You can’t win it all :slight_smile: And in fact, I don’t mind venturing off the beaten track all that much. Sometimes doing things my way allows me to move faster (mostly because I am not that good at figuring out how some things are done in the library, and I can code up simple things rather quickly), but mostly I do it because this approach is very good for learning.

What I really appreciated about this competition is the sense of ‘hacking’ on something that it reconnected me with. This is the sort of state of mind where you know how everything you use works, you use simple building blocks, and you can change things up rather quickly.

Well, maybe knowing how everything works is not the right expression - I surely have no idea how augmentation is applied, for example, nor do I have a particular willingness to know that. It’s more about knowing what each building block does than how it does it. No surprises, simple behavior.

Going forward, I would like to stick more closely to what the library provides, but this feeling of ‘hacking’ on something is definitely something I will continue to look for in any personal project I work on. I think I would even be willing to trade performance for more of that feeling. My current thinking is that in the long run, staying in this hacking state is actually a better predictor of success than initial results. But hey - not sure if I’ll have the same perspective on this 6 months from now.

Looking forward to the part 2 awesomeness that will ensue soon :slight_smile: and I already have a couple of ideas for future projects, this time with even more fastai :smile:

EDIT: just wanted to clarify - there are only two places where I don’t use the library, the Siamese notebook and the final one, and even there it is only for reading in data. As a matter of fact, as far as I am aware, fastai offers the best way of augmenting images currently available, and just yesterday I realized you can apply the transformations to arbitrary data with ease… Hoping to share a notebook on that in the near future… For everything else I am using the library, and it is only through its functionality that I was able to complete so much in record time (the training loop, for instance, has so many cool aspects you will not find elsewhere that I am only now learning about).

13 Likes