Share your work here ✅

I used a pre-trained GAN to turn images into cartoons. Here are a couple of before-and-after shots. I'm about to turn it into a Twitter bot, although I'm not sure how long it will last :smile:

(before/after images: jeremey-howard-kaggle, rachel, donald_trump4)

28 Likes

More and more inspired by fast.ai, I just published part 1 of a three-part article on Medium. Big thanks to fast.ai for inspiring so many of us. I've also created some 3D animations of loss landscapes to illustrate the article, and I'm planning a lot more in the 3D visualization area (note: the loss values are deliberately scaled/exaggerated in the animations to help with visual contrast). Here's the article:


and some of the 3D animations I created for the article:

I'm enjoying part 1 of the 2019 DL course a lot. fast.ai totally rocks! :smiley:

29 Likes

Nice!

What approach / function did you use to generate training data? (assuming you used the same approach as in lesson 7)

Hi everyone,
For this week, I have shared an interview with another amazing fastai fellow: @alexandrecc
Dr. Alexandre shared many great insights about the intersection of DL and medical tech, and about his work.
Link to interview

Regards,
Sanyam.

6 Likes

I did not train a model; I used a pre-trained GAN, specifically CartoonGAN. Check out this repo

3 Likes

Hi guys :grinning:

I was part of the original FastAI (Keras) course back in 2017 but wanted to do it again with PyTorch and the FastAI framework.

Just finished Lesson 1 and I wrote a little classifier for mosquito species identification.

I’m sharing my blogpost here.
Hope you find it interesting.

2 Likes

Hello sgugger

Can you elaborate on this? What is the proper way (or an example) of getting intermediate features using fastai v1? When do we use remove()?
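For context, here is a minimal sketch of how I understand intermediate-feature extraction with plain PyTorch forward hooks (the mechanism that fastai's hooks wrap); the toy model and names are just for illustration, so please correct me if this is off:

```python
import torch
import torch.nn as nn

# A toy model standing in for the real network's body.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))

features = {}

def save_activation(module, inputs, output):
    # Called on every forward pass; detach so we don't hold the graph.
    features["act"] = output.detach()

# Register the hook on the layer whose output we want.
handle = model[0].register_forward_hook(save_activation)

x = torch.randn(4, 10)
model(x)                       # forward pass fills features["act"]
print(features["act"].shape)   # torch.Size([4, 20])

# remove() unregisters the hook once you're done, so later forward
# passes no longer capture (and overwrite) activations.
handle.remove()
```

My understanding is that remove() is for cleanup: you call it when you no longer want the hook firing on every forward pass.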

Regards

Great point ShawnE! Adding a more varied dataset including neutral poses where stance and gi feature more prominently could be a great way to go.
That's great that you've trained for so long. I've dabbled here and there in BJJ, but moving around meant I often had to leave a club just as I felt I was getting somewhere. I hope to get back to it soon…
Thank you for the great feedback!

Shoe classifier is now live!

Based on the image classification exercise, I created a simple classifier that distinguishes between a few types of men's dress shoes. I put it on GitHub, deployed it on Render, and it's now live!
I can't believe how easy it all was, just following the instructions in the course resources!
And I'm going to live-blog the entire course work @

Even made a pull request to one of the course’s docs.

I did have one last thought on your "counting" application (which, by the way, I think is pretty unique and creative). If you recast it as a regression task and, after hard work, it still overestimates counts larger than the range you trained it on, that may mean something. I wonder whether, to a small extent, this is how animals get their "number" sense; it may be a primitive image-recognition mechanism. If you know any psychologists or biologists, try to find or read about related experiments. Maybe animals also tend to overestimate counts. I frankly don't know how you could even verify this, but if it is true, then e.g. a bird may think there is more food if you give it all at once versus giving it twice in halved portions (assuming it can remember).

For those who have already watched part 1: I wrote this Medium post to explain how I keep up with developments in AI. I recommend reading it because it covers techniques that help you find the most relevant information easily. Don't just wait for part 2; spend time learning on your own.

1 Like

I have been working on a classifier to identify dinosaur images. The challenge I set out to tackle is to build something that can identify images from a variety of sources, whether they be paleontologists' drawings or pictures of kids' toys. I'd like to take this all the way to an app that parents like me can use whenever our kids ask, "What's that dinosaur?"

I was impressed that I was able to achieve 80% accuracy over 10 categories, while still having some errors in my data labels (revealed when looking at the top losses). I was also able to get 87% out of a resnet50 model, but that accuracy varied widely between notebook runs. If anyone has time to look at the notebook, maybe they can help me figure out why.

The notebook is posted on github here.
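One suspicion about the run-to-run variance: unseeded randomness (the new head's weight init, data augmentation, dataloader shuffling) can easily move accuracy by a few points between runs. A minimal, hedged sketch of seeding everything available (the numpy/torch parts are guarded so the snippet runs even where those libraries aren't installed):

```python
import random

def set_seed(seed):
    """Seed every RNG in use so runs are repeatable."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        # Trade a little speed for repeatable cuDNN kernels.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

set_seed(42)
```

Even with seeding, GPU nondeterminism can leave some variance, but it usually shrinks a lot.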

1 Like

I have worked on a deep learning twitter bot generating funny “inspirational” tweets.
It’s live and you can see it here: https://twitter.com/DeepGuruAI.

It uses ULMFiT and techniques from the course.
The code is open source (https://github.com/tchambon/DeepGuru), and the github page describes the different steps (training, generation and bot).

4 Likes

Great code @tomsthom (also similar work here https://github.com/DaveSmith227/deep-elon-tweet-generator). It seems this approach (language learner prediction) is not the best for text generation. Jeremy mentioned ‘beam search’ as one of the ‘tricks’ for this purpose. I haven’t seen an implementation of beam search in fastai. Anyone looked into that?

1 Like

The issue with beam search is that it generates the most probable tweet, so given the same starting point you will get the same tweet every time.

My approach is to generate a lot of tweets (with varying temperature, for variety) and then use a second neural net to select the best one.
I am working on that second network (in the meantime I filter manually, dropping around 30% of the tweets).
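For anyone curious, the temperature idea can be sketched in plain Python (no fastai dependency; the function name is made up for illustration). A higher temperature flattens the distribution for more variety; a lower one is closer to greedy decoding:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample one token index from raw scores (logits), with the
    distribution flattened or sharpened by a temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

To generate a batch of candidates you would call this repeatedly at different temperatures, then rank or filter the results.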

1 Like

Hi all :wave:t3:,

After unpacking the material from Lecture 1 and 2, I went back to my initial project of creating, in Google Colab, a model to recognize 40 characters from The Simpsons.

After cleaning the dataset and experimenting with different learning rates, I was able to improve the accuracy from 92% to 95%. :grin:

You can read more about this, in my blog post.


https://raimanu-ds.github.io/tutorial/can-ai-guess-which-the-simpsons-character-part-2/

2 Likes

Here’s the ref to Jeremy’s note (no further elaboration): Tricks for using language models to generate text
I believe you get the same output for a fixed input (e.g. translation). In language generation you can always change the seed or starting text (e.g. use a random word from the vocab). Beam search should be better than plain next-token prediction.
I don't know how to implement beam search or GANs for language generation in fastai, but I am interested in exploring it when available. I tried poetry generation earlier (using language-model learner prediction: predict after xxbos or xxfld); the results were not that impressive.
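To make the idea concrete, here is a minimal beam-search sketch in plain Python over a hypothetical next-token model (all names are illustrative, not a fastai API):

```python
import math

def beam_search(next_probs, start, beam_width=3, max_len=5):
    """Minimal beam search. next_probs(seq) must return a dict
    {token: probability} for the next token given the sequence so far
    (a stand-in for a language model's softmax output)."""
    # Each beam is (sequence, cumulative log-probability).
    beams = [([start], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, p in next_probs(seq).items():
                candidates.append((seq + [tok], score + math.log(p)))
        # Keep only the top-scoring partial sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]  # most probable sequence found

# Toy bigram "model": next-token probabilities given the last token.
table = {"a": {"b": 0.6, "c": 0.4},
         "b": {"a": 0.9, "c": 0.1},
         "c": {"a": 0.5, "b": 0.5}}
print(beam_search(lambda seq: table[seq[-1]], "a", beam_width=2, max_len=3))
# → ['a', 'b', 'a', 'b']
```

A real implementation would work on log-probabilities from the model's softmax and add length normalization, but the structure is the same.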

Trimmed News - Read a summarized version of the news
Trimmed News is an app for iOS and Android. It is currently in beta, but you can already read news from 8 different sources. Summaries are produced using a state-of-the-art model, and some of the AI parts are done with fastai. I'm going to improve it a lot in the following weeks, and in many cases fastai will be the choice because of its simplicity. I'm looking for beta users to test it, so if this interests you, check out the website and download the app.

https://trimmednews.com

1 Like

Hi all,
The new baby has taken a bit of time.

(baby photo)

But I finally got some time to play around with medical images (from lesson 1) and put together both a youtube video and medium post. Feel free to check them out and provide feedback. The notebook is on GitHub.

Update:
Render is now up. https://kvasir-demo.onrender.com/ and I am down to 3%

Update 2:
It is actually a multi-class classification problem. I will have to solve that later.
https://www.researchgate.net/publication/316215961_KVASIR_A_Multi-Class_Image_Dataset_for_Computer_Aided_Gastrointestinal_Disease_Detection

7 Likes

Cute! We need more baby pics on this forum!! :grin:

2 Likes