Share your work here ✅

Hi
I’ve always been fascinated by paintings and their different styles. I’ve been working on a project to compare Baroque paintings with ancient Greek pottery paintings.
Baroque art is characterized by great drama, rich, deep color, and intense contrasts of light and shadow, whereas in Greek pottery painting figures and ornaments were painted on the body of the vessel using shapes and colors reminiscent of silhouettes.
I used a CycleGAN to experiment and see which features of each style were most important for the model when translating paintings from one style to the other.
As an example:


You can see that the Baroque generator tries to capture deep color contrasts and shadow features. That’s a hard job, since Greek figures are usually represented as a solid shape of a single color, usually black, with edges matching the outline of the subject. The Greek generator, on the other hand, captures the main features of dark shadows and solid edges reasonably well.
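For context, it is the cycle-consistency loss that pushes each generator to preserve the content of a painting while swapping its style. Here is a minimal sketch of that idea (hypothetical generator names, not the exact code from the notebook):

```python
import torch.nn as nn

# Minimal sketch of the cycle-consistency term in a CycleGAN, assuming
# G_baroque (Greek -> Baroque) and G_greek (Baroque -> Greek) are generator
# networks and real_greek / real_baroque are batches of images.
l1 = nn.L1Loss()

def cycle_consistency_loss(G_baroque, G_greek, real_greek, real_baroque, lam=10.0):
    # Greek -> Baroque -> Greek should reconstruct the original Greek image
    rec_greek = G_greek(G_baroque(real_greek))
    # Baroque -> Greek -> Baroque should reconstruct the original Baroque image
    rec_baroque = G_baroque(G_greek(real_baroque))
    return lam * (l1(rec_greek, real_greek) + l1(rec_baroque, real_baroque))
```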
Training took about 6 hours on a single GPU. The notebook is available in the GitHub repo: CycleGAN
I would really appreciate your feedback.

4 Likes

Don’t judge a book by its cover!
(Let my CNN do it for you :wink: )

I just finished my second project: training a resnet34 on 15 different classes of book covers (:blue_book::closed_book::green_book::notebook::notebook_with_decorative_cover::orange_book:) and I’m super excited to share my results! A few thousand images, a bit of data grooming and architecture tweaking, an hour of training, and it’s pretty stable at around 45% accuracy! (Random guessing would be 7%.) I believe a good bit of this is due to my choosing somewhat ambiguous/overlapping classes.
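For anyone who wants to try something similar, here is a rough fastai v1 sketch of this kind of setup (folder layout and hyperparameters are illustrative, not my exact choices):

```python
from fastai.vision import *

# Rough fastai v1 sketch: book covers in one folder per genre under `path`.
path = Path('data/book_covers')
data = (ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                   ds_tfms=get_transforms(), size=224, bs=64)
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                              # train the new head first
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-3))    # then fine-tune the whole network

interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=5)                     # which genres get mixed up the most
```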

And now for the fascinating results:

  • Easy: Romance Novels and Biographies have an unambiguous stand-apart style

  • Runners Up: Fantasy, Cookbooks, and Children’s Books are pretty straightforward, too

  • Most Confused: Mystery x Thriller x Crime, and SciFi x Fantasy (hard to draw the line sometimes)

  • Hardest: SciFi turns out to be more of a mechanic than a content category, and its covers can span many subjects

  • WTF: Western is a genre dedicated to tales of cowboys, but it can also cross over fabulously…

If anyone has suggestions for breaking through my personal accuracy asymptote, I’d love to chat!

6 Likes

I created a dog breed image classifier using lesson3-planet.ipynb as starter code and the Stanford Dogs dataset from Kaggle.

Give it a try with your dog pictures here: https://whatdog.onrender.com/

1 Like

Hi Maria,
Could you share your val/train IDs, so I can compare results on the same data split?
That would be great.
Thanks

Ever taken a photo, but struggle to come up with the perfect social media caption?

Meet WittyTourist
GitHub: https://github.com/DaveSmith227/witty-tourist

It’s a web app that gives you a witty caption when you upload a pic with a San Francisco landmark. The app detects the landmark in the photo (currently trained on 13 landmarks) and returns 1 of several pre-loaded captions for that landmark.

Enjoy the fun mock-ups with Danny Tanner and Nicholas Cage :selfie: :bridge_at_night: :laughing:

Building the dataset - I trained it with ~5,000 photos scraped from Instagram (and tediously hand-labeled…) and it achieves 97% accuracy on a separate test set (~1,000 images) scraped from Google.

Deployment - The app was deployed with Render, which is SUPER EASY and redeploys immediately when you push updates to your app’s GitHub repo - thank you @anurag!

Jupyter notebook (on GitHub link above) - Walks through the full training loop, how to scrape images, and how to build and load a separate test set.

Training tip - Start training with 128x128 images and then re-train with the same images at 256x256 to improve accuracy, as shown by Jeremy in Lesson 3 (this helpful tip took my validation accuracy from 91% to 95% without overfitting).
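A rough sketch of this progressive-resizing pattern in fastai v1 (names, folder layout, and hyperparameters are illustrative, not my exact settings):

```python
from fastai.vision import *

path = Path('data/landmarks')   # labeled landmark photos, one folder per class

# Train at 128x128 first...
data_128 = (ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                       ds_tfms=get_transforms(), size=128, bs=64)
            .normalize(imagenet_stats))
learn = cnn_learner(data_128, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(5)

# ...then swap in the same images at 256x256 and fine-tune further
data_256 = (ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                       ds_tfms=get_transforms(), size=256, bs=32)
            .normalize(imagenet_stats))
learn.data = data_256
learn.freeze()
learn.fit_one_cycle(3)
```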

I love playing “tour guide” to friends/family visiting SF and this toy project served as a fun way to learn a variety of new skills (learning the fast.ai library, deploying an app, HTML/CSS, etc…) and bring a bit of joy to others. I also got inspiration from @whatrocks’s Clabby cousin app so thank you as well!

Let’s continue to share and inspire each other’s ideas :slight_smile:

11 Likes

I trained StyleGAN on a portrait art dataset and thought the results were decent. Because I used transfer learning, I was able to train on a K80 on Colab much faster than from scratch. Here are the GitHub repo and results.

I trained it further on more modern art and this was the result

7 Likes

Hello everyone. Thank you for sharing all your work. Some of you are developing really inspiring applications.

To get a better understanding of the notebooks, I try as much as I can to apply them to different datasets, sometimes by joining a Kaggle competition, sometimes by searching on https://toolbox.google.com/datasetsearch. Here are some of my efforts.

You can run all these examples directly on Google Colab.

Lesson 1: I made a classifier to determine whether a specific painting is by Rembrandt, Van Gogh, Leonardo, or Vermeer.

Lesson 3: This time I built an NLP application to classify whether an SMS message is ham or spam.
Thanks to this Kaggle Dataset (https://www.kaggle.com/uciml/sms-spam-collection-dataset)
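A minimal fastai v1 sketch of this kind of classifier (the column names follow the Kaggle CSV but should be double-checked against your download; hyperparameters are illustrative):

```python
from fastai.text import *

# Fastai v1 text classifier sketch on the SMS spam CSV ('v1' = label, 'v2' = text).
path = Path('data/sms-spam')
data = TextClasDataBunch.from_csv(path, 'spam.csv',
                                  text_cols='v2', label_cols='v1', valid_pct=0.2)

learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5, metrics=[accuracy])
learn.fit_one_cycle(4)
learn.predict("WINNER!! Claim your free prize now")   # -> ('spam', ...)
```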

Lesson 4: When you start with machine learning on Kaggle, the challenge is to build your first model on the Titanic dataset (https://www.kaggle.com/c/titanic), where you have to predict which passengers survived. A neural network didn’t give me the best results, but it was a nice exercise to play with the tabular notebook.
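A minimal fastai v1 tabular sketch for this kind of setup (the feature choices and layer sizes are illustrative, not necessarily what I used):

```python
from fastai.tabular import *
import pandas as pd

# Fastai v1 tabular sketch on the Titanic training CSV.
df = pd.read_csv('data/titanic/train.csv')
dep_var = 'Survived'
cat_names = ['Pclass', 'Sex', 'Embarked']
cont_names = ['Age', 'Fare', 'SibSp', 'Parch']
procs = [FillMissing, Categorify, Normalize]

data = (TabularList.from_df(df, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_rand_pct(0.2)
        .label_from_df(cols=dep_var)
        .databunch())

learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(5)
```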

Lesson 5: This is a copy of a Kaggle notebook (https://www.kaggle.com/aakashns/pytorch-basics-linear-regression-from-scratch) to get a better understanding of the PyTorch basics (loss, gradient descent, backpropagation) by building a linear regression model on a really simple dataset.
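In the same spirit, a from-scratch linear regression with a manual gradient-descent loop looks roughly like this (toy data):

```python
import torch

# Toy data: y = x @ true_w + 3 plus a little noise
x = torch.randn(100, 3)
true_w = torch.tensor([[2.0], [-1.0], [0.5]])
y = x @ true_w + 3 + 0.1 * torch.randn(100, 1)

# Parameters we want to learn
w = torch.randn(3, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.1
for epoch in range(100):
    pred = x @ w + b
    loss = ((pred - y) ** 2).mean()   # MSE loss
    loss.backward()                   # backpropagation computes w.grad and b.grad
    with torch.no_grad():             # gradient-descent update
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()
```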

5 Likes

I have created a web app to demonstrate the capabilities of my poisonous plant classifier model and deployed it on Heroku. Here, take a look: https://poisonous-plant-classifier.herokuapp.com
You can upload a picture and find out whether the plant belongs to one of the 8 categories of poisonous plants the model can identify. I used resnet18 due to the limitations of Heroku’s free tier; the model achieved 93% accuracy on the test data. Here is the resnet18 kernel:
https://www.kaggle.com/nitron/poisonous-plant-classifier-renset18
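For anyone curious how an app like this serves predictions, a rough fastai v1 inference sketch (assuming the model was exported with learn.export(); file and folder names are illustrative):

```python
from fastai.vision import *

learn = load_learner('models')                 # loads models/export.pkl
img = open_image('uploaded_plant.jpg')         # image sent by the user
pred_class, pred_idx, probs = learn.predict(img)
print(pred_class, probs[pred_idx])             # predicted class and its probability
```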
What do you think?
Next I am going to make the model predict a plant from a live video stream :slight_smile:

4 Likes

Super impressive project :slight_smile:

I don’t think you can insert a video into markdown, but I might be wrong on that one. However, there is a really nice way of including one in a Jupyter notebook (along with a bunch of other media formats), in case at some point you want to leverage notebooks to showcase your work: IPython.lib.display
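A small sketch of what that looks like in a notebook cell (the file name and video ID are placeholders):

```python
from IPython.display import Video, display
from IPython.lib.display import YouTubeVideo

# Run inside a Jupyter cell; the file name and video ID are placeholders.
display(Video('demo.mp4'))            # embed a local video file
display(YouTubeVideo('dQw4w9WgXcQ'))  # or embed a hosted video by its ID
```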

So many amazing projects shared here :slight_smile: I think we are seeing a fastai explosion - so hard / impossible to keep up with what is happening these days :slight_smile:

Just wanted to share my whale repository now that it is completed :slight_smile:

It contains a bunch of stuff including:
:white_check_mark: training a classifier
:white_check_mark: training on bounding boxes (localization)
:white_check_mark: landmark detection
:white_check_mark: bounding box extraction
… and finally training a model that combines classification and metric learning (places in top 7% of a recent Kaggle competition)

From the perspective of being able to leverage fastai functionality, some of the notebooks do a better job, some worse. You can’t win it all :slight_smile: And in fact, I don’t mind venturing off the beaten track all that much. Sometimes doing things my way allows me to move faster (mostly because I am not that good with figuring out how some things are done in the library and can code up simple things rather quickly) but mostly because this approach is very good for learning.

What I really appreciated about this competition is the sense of ‘hacking’ on something that it reconnected me with. This is the sort of state of mind where you know how everything you use works, you use simple building blocks, and you can change things up rather quickly.

Well, maybe knowing how everything works is not the right expression - I surely have no idea how augmentation is applied, for example, nor do I have a particular willingness to know that. It’s more about knowing what each building block does than how it does it. No surprises, simple behavior.

Going forward I would like to stick more closely to what the library provides but this feeling of ‘hacking’ on something is definitely something I will continue to look for in any personal project I work on. I think I would even be willing to trade performance for more of that feeling. My current thinking is that in the long run staying in this hacking state is actually a better predictor of success than initial results. But hey - not sure if I’ll have the same perspective on this 6 months from now.

Look forward to part 2 awesomeness that will ensue soon :slight_smile: and already have a couple of ideas for future projects, this time with even more fastai :smile:

EDIT: just wanted to clarify - there are only two places where I don’t use the library, the Siamese notebook and the final one, and that is only for reading in data. As a matter of fact, as far as I am aware, fastai offers the best way of augmenting images currently available, and just yesterday I realized you can apply the transformations to arbitrary data with ease… Hoping to share an NB on that in the near future… For everything else I am using the library, and it is only through its functionality that I was able to complete so much in record time (the training loop, for instance, has so many cool aspects you will not find elsewhere that I am only now learning about)

13 Likes

Hey @radek, how can I reach the level where I can customize the network architecture, optimizers, and all the things we can do with other libraries out there… because right now it seems like a black box that I can only tweak to some extent…

My guess is that part 2 will provide a lot of these answers :slightly_smiling_face: I have not touched optimizers apart from maybe changing hyperparams here and there since I started doing the courses, not sure there is any need for that.

For me, what I want are just simple things - reading in the data how I would like to (images, tabular data, you name it), creating custom architectures, and using whatever I want for labels. I think that takes me 99% of the way along my wish list :smile:
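As a rough illustration of the ‘custom architectures’ part: any plain PyTorch nn.Module can be handed to fastai v1’s Learner. The toy model below is made up, and `data` is assumed to be an existing DataBunch.

```python
from fastai.vision import *

class TinyNet(nn.Module):
    "A made-up architecture: plain PyTorch, no fastai-specific layers needed."
    def __init__(self, n_classes):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), Flatten())
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.body(x))

# `data` is any existing DataBunch; data.c is its number of classes
learn = Learner(data, TinyNet(data.c), loss_func=nn.CrossEntropyLoss(), metrics=accuracy)
learn.fit_one_cycle(4)
```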

I think maybe understanding pytorch a little bit better helps to raise the curtain a little bit on what is happening… not sure.

Despite what I wrote above (maybe I was not very clear there) I no longer feel the need to understand everything… which is a very nice side effect of taking the fastai courses :slight_smile: I just want simple building blocks I can hook up together, train a model here and there, and move on with my life :slight_smile:

So to answer your question more fully - I think both reading custom data and modifying architectures are coming in part 2 :slight_smile: And if you can’t wait, there is a lot of that in part 2 v2 and in the machine learning course (but that uses fastai v0.7 I believe).

I would venture a guess part 2 + practice would be a short and correct answer to your question :slight_smile:

4 Likes

thanks @radek

It seems to me that everything you would want to tweak is tweakable in fast.ai. I recently participated in my first Kaggle competition, and for most of it, I definitely felt out of my depth.

But what I did do was try to find out what other data scientists were doing and figure out how I was going to accomplish the same thing with fast.ai, and Kaggle is a great place to learn because many people (like @radek) are willing to share their process.

If you haven’t already gone through it, course-v3 (specifically Lesson 7) should be enough to get you to the point of building your own architectures. I downloaded the fast.ai source so I could go through it in my code editor and find out how the library does what it does. It’s really helping me become intimate with the library and better understand applied concepts.

Participating earnestly in a Kaggle competition and going through course-v3 allowed me to implement the neural network below from scratch, and I’m only getting started.

3 Likes

Great work!
And thanks for your suggestion of replicating other Kaggle kernels using fast.ai.
I will try this on Fashion-MNIST to see how it works out…

1 Like

Built a project on finding similar images using the last layer of a CNN and locality-sensitive hashing for approximate nearest neighbors, on the Caltech 101 dataset. Here are some results (the upper-left image is used to query the others).

Here is the link to the notebook.
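In outline, the approach uses the penultimate-layer activations as the image embedding and random-hyperplane hashing as a simple form of LSH. A hedged sketch (not the notebook’s exact code - the backbone choice, hash width, and helper names are illustrative):

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# Use the penultimate-layer activations of a CNN as an embedding, then hash
# embeddings with random hyperplanes (a simple form of locality-sensitive hashing).
backbone = models.resnet34(pretrained=True)
encoder = nn.Sequential(*list(backbone.children())[:-1])   # drop the final fc layer
encoder.eval()

def embed(batch):
    # batch: (N, 3, 224, 224) normalized image tensors -> (N, 512) embeddings
    with torch.no_grad():
        return encoder(batch).squeeze(-1).squeeze(-1).numpy()

rng = np.random.RandomState(0)
planes = rng.normal(size=(512, 16))        # 16 random hyperplanes -> 16-bit hash

def lsh_bucket(vec):
    # Images whose embeddings fall in the same bucket are similarity candidates
    return tuple((vec @ planes > 0).astype(int))

# Query flow: embed the query image, look up its bucket, then rank the bucket's
# candidates by cosine similarity to the query embedding.
```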

3 Likes

Hello folks,
I attended fastai part 1 (2019) Oct-Dec last year and did a project, Deep Visual Semantic Embedding Models for Mobile, inspired by https://github.com/fastai/fastai/blob/master/courses/dl2/devise.ipynb, using a modified fastai 1.0.33.

My project involved using semantic information from word embeddings to augment visual deep learning models to correctly identify a large number of image classes, including previously unseen classes. I used this work for a course project at Stanford (AI Graduate Certificate) too.

Pets lessons (1 & 6) were super helpful in building the model and interpreting the results:

https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson1-pets.ipynb
https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson6-pets-more.ipynb

Github: https://github.com/swarna04/cs230
Stanford project report: http://cs230.stanford.edu/projects_fall_2018/reports/12449389.pdf

I presented this work at an internal Machine Learning conference at Adobe too and it was received very well. Thank you @jeremy and team for the amazing library and super useful docs.

Project Details
I used a lightweight mobile architecture, SqueezeNet 1.1, to train a model with pretrained fastText word vectors to learn semantic relationships between image labels and map images into a rich semantic embedding space. The model takes an image as input and outputs a 300-D image feature vector. Using an efficient cross-platform similarity search library such as nmslib, the output feature vector can be used for image similarity search over model predictions, or for label prediction by looking up the nearest fastText word vector representation among the known image labels in the dataset. The model can also be generalized to zero-shot use cases by performing nearest neighbor search over model predictions for the fastText word vector representation of an input text label.
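As a rough illustration of the nearest-neighbor lookup step with nmslib (the `label_vectors` and `labels` arrays are assumed to already hold the fastText vectors and names of the known labels; this is a sketch, not the exact project code):

```python
import nmslib
import numpy as np

# label_vectors: (n_labels, 300) fastText vectors of the known labels;
# labels: the corresponding label strings (both assumed to exist already).
index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(np.asarray(label_vectors))
index.createIndex()

def predict_label(image_vec, k=5):
    # image_vec: the model's 300-D output for an image
    ids, dists = index.knnQuery(image_vec, k=k)
    return [labels[i] for i in ids]   # candidate labels, closest first
```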

This project observed that such visual-semantic models are able to perform image-to-image, image-to-text, and text-to-image associations with reasonable accuracy while using less than 7% of the disk space and training parameters of bigger models such as ResNet34 (used as the baseline).

16 Likes

I collected 100 photos each of Pikachu, Bulbasaur, and Charmander, trained a model to 90% accuracy, and was able to tell the 3 starter Pokémon apart :smiley: :smile: Feels great B-) Will make something better with time :smiley:

Hi @radek

Really appreciate your repo; your work really guided me along in my study.

I started to learn deep learning in Dec 2018 (fastai part 1 in Jan). I also joined the Humpback Whale competition as practice, and noticed that resnet-50 on 5005 classes is far from ideal. Somehow I found your repo along the way, and it kind of pointed me in the right direction on the computer vision part. To understand what you had done, I dug into the fastai docs to figure out how callbacks work, how to get a customized MAP5 metric, how to write a customized ItemList (to be able to show paired Siamese data), how hooks work… and even more… (especially why it’s ni*2 for AdaptiveConcatPool2d :slight_smile: - see the sketch below).
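For anyone else wondering about the ni*2 detail: fastai’s AdaptiveConcatPool2d concatenates an adaptive average pool and an adaptive max pool along the channel dimension, so the head sees twice as many features. A minimal re-implementation of the same idea:

```python
import torch
import torch.nn as nn

class AdaptiveConcatPool2d(nn.Module):
    "Concatenate adaptive average pooling and adaptive max pooling."
    def __init__(self, size=1):
        super().__init__()
        self.ap = nn.AdaptiveAvgPool2d(size)
        self.mp = nn.AdaptiveMaxPool2d(size)

    def forward(self, x):
        return torch.cat([self.mp(x), self.ap(x)], dim=1)

x = torch.randn(4, 512, 7, 7)            # ni = 512 feature maps from the backbone
print(AdaptiveConcatPool2d()(x).shape)   # torch.Size([4, 1024, 1, 1]) -> ni*2 channels
```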

Just from working to understand your repo, I got my first PR into fastai and my first kernel medal on Kaggle…
I am currently still working on the Siamese network and building a triplets dataset (anchor/positive/negative), but I am starting to understand the details of solving a multi-class problem (1000+ classes) - you can’t just use a pretrained resnet and hope it works; that’s why bounding boxes and one-shot learning come in.

Still got a lot to learn, but thank you very much!!! :+1:

13 Likes

I just finished Part 1, Lesson 2 and deployed a simple web app. It takes a link to an image of art and attempts to identify the artist. I’ve only trained it on a few artists right now, so it’s a bit limited in scope, but it was informative to work on nonetheless! Link is below.

http://artornot.akshairajendran.com/