Share your work here ✅

image

I tried using super resolution, based on lesson 7, to restore 20+ year-old aerial images. Instead of a crappifying function, I used the old images themselves as input.
It all became a Medium post: (link)
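
Roughly, the setup looks like this (a minimal sketch, assuming the current fastai API and a hypothetical folder layout where each old tile in `old/` has a matching recent high-quality tile of the same name in `new/`; the real notebook differs in the details):

```python
from fastai.vision.all import *

path = Path('aerial')

# Pair each old, degraded tile (the "crappified" input) with its modern counterpart
dblock = DataBlock(
    blocks=(ImageBlock, ImageBlock),              # image in -> image out
    get_items=get_image_files,
    get_y=lambda o: path/'new'/o.name,            # target = recent tile with the same filename
    splitter=RandomSplitter(valid_pct=0.1, seed=42),
    item_tfms=Resize(256),
)
dls = dblock.dataloaders(path/'old', bs=8)

# U-Net learner as in lesson 7; plain pixel-wise loss here instead of the feature loss
learn = unet_learner(dls, resnet34, loss_func=MSELossFlat())
learn.fine_tune(5)
```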

2 Likes

I used what I learned in lesson 1 to build two image classifiers.

The first was a coyote vs fox classifier that got 100% accuracy using 25 images for each animal.

The second one was a classifier for car body styles such as sedans, coupes and convertibles. I used 2000 images, 250 for each of the 8 different body styles. With resnet34 I got 62%, and 63% after fine-tuning; with resnet50 I got 66%, and 70% after fine-tuning. There were some terrible predictions though, like a convertible being misclassified as a truck and a minivan being misclassified as a convertible.
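
For anyone curious, the training loop is basically the lesson 1 recipe (a rough sketch, assuming a hypothetical `cars/` folder with one subfolder per body style):

```python
from fastai.vision.all import *

# Hypothetical layout: cars/sedan/*.jpg, cars/coupe/*.jpg, ... (8 body styles, ~250 images each)
path = Path('cars')
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet50, metrics=accuracy)
learn.fine_tune(4)                                # fine-tune the pretrained backbone

# Inspect the worst mistakes, e.g. convertibles predicted as trucks
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)
interp.plot_confusion_matrix(figsize=(8, 8))
```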

1 Like

Just finished my first lecture 2 assignment: deploying a web app. You can download an image of beer or kvas, and my trained model will tell you whether it's beer or kvas in the image.
Beer or Kvas? I also made a short video of how it works: https://youtu.be/gYRYwhp9gYc
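
The prediction side of the app boils down to a few lines (a rough sketch, assuming the model was exported with `learn.export()` to a hypothetical `export.pkl`):

```python
from fastai.vision.all import *

learn = load_learner('export.pkl')                # model exported after training

def classify(img_path):
    """Return the predicted label ('beer' or 'kvas') and its probability."""
    pred, pred_idx, probs = learn.predict(PILImage.create(img_path))
    return pred, float(probs[pred_idx])

print(classify('some_drink.jpg'))                 # hypothetical test image
```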

1 Like

This is a little bit different and I know we generally don’t look favorably on RL around here :wink: but I’ve always been curious about how AlphaZero and Monte Carlo Tree Search worked so I wrote a blog post about it: http://joshvarty.github.io/AlphaZero/

Video: https://www.youtube.com/watch?v=62nq4Zsn8vc
GitHub: https://github.com/JoshVarty/AlphaZeroSimple

In order to keep things simple I looked at the easiest game I could find: Connect2. In Connect2, players take turns placing stones on a board, with the goal of getting two in a row:

image

Obviously this is super easy to win as Player 1, but it’s still an interesting game in that there is opportunity for wins, losses and draws.

The complete game tree is small enough that we can visualize it on a single page:

In my post I discuss the three main components of AlphaZero:

  1. Value Network - A network that takes in a single game state, and outputs a single number. Our value network should output values near 1 if we’re going to win, -1 if we’re going to lose and 0 if we’re going to draw.

  2. Policy Network - A network that takes in a single game state and outputs a set of probabilities that suggest promising moves. The outputs from this network are referred to as “priors”. They are initial or “prior” probabilities that suggest good moves for the Monte Carlo Tree Search to explore. Later, MCTS will use these suggestions to simulate the games that result from taking promising actions.

  3. Monte Carlo Tree Search - An approach for building up the game tree from scratch. MCTS is guided by the value network and policy network but goes a step further and actually simulates the games that would result from a given move and how our opponent might respond. I created a visualization (with code!) that walks through the creation of the game tree (a toy sketch of these pieces follows right after this list):
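
To make those pieces concrete, here is a heavily simplified toy sketch (my own illustration, not the code from the repo; it assumes PyTorch, a Connect2 board encoded as a length-4 vector, and the standard PUCT rule MCTS uses to decide which child to explore next):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    """Takes a 4-cell Connect2 board and returns (move priors, value in [-1, 1])."""
    def __init__(self, board_size=4, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(board_size, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, board_size)   # one logit per cell
        self.value_head = nn.Linear(hidden, 1)             # scalar value

    def forward(self, board):
        h = self.body(board)
        priors = F.softmax(self.policy_head(h), dim=-1)    # "prior" move probabilities
        value = torch.tanh(self.value_head(h))             # ~1 win, ~-1 loss, ~0 draw
        return priors, value

def puct_score(parent_visits, child_visits, child_value, prior, c_puct=1.0):
    """PUCT rule used during tree search: exploit the current value estimate,
    but explore moves that have a high prior and few visits so far."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return child_value + exploration

# Example: evaluate the empty board with player 1 to move
net = PolicyValueNet()
priors, value = net(torch.zeros(1, 4))
print(priors, value)
```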

8 Likes

Hey guys. I finished the first lesson a while ago and was thinking about how I should deploy my model. I looked into Render and found it fascinating, although I also kind of wanted to see if I could build a quick desktop application locally, just to see if my model works. Here is a link to the video. I hope some of you find my little project useful. Willing to post the code if asked; I'll put it on GitHub soon anyway. Thanks!

2 Likes

I just finished writing and recording a tutorial on the fastai2 DataLoader and how to easily use it with NumPy/tabular data as a simple example. Read more here: DataLoaders in fastai2, Tutorial and Discussion
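
The gist, in a few lines (a rough sketch of the idea rather than the full tutorial; it assumes fastai2's `DataLoader`, which accepts any indexable collection and collates NumPy arrays into tensors):

```python
import numpy as np
from fastai.data.load import DataLoader

# Toy NumPy dataset: 100 rows of 4 features with a binary target
x = np.random.randn(100, 4).astype(np.float32)
y = (x.sum(axis=1) > 0).astype(np.int64)
dset = list(zip(x, y))                   # any indexable collection of (x, y) pairs

dl = DataLoader(dset, bs=16, shuffle=True)
xb, yb = next(iter(dl))                  # items are collated into tensors per batch
print(xb.shape, yb.shape)                # roughly: [16, 4] and [16]
```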

3 Likes

Thanks for putting together this presentation! I have been curious about RL ever since seeing AlphaZero’s chess innovations. This is the first time I was able to understand the RL method. Your choice of a simple example game was a great help.

It’s interesting that the human chess skill set is roughly divided into tactics and positional strategy, while RL divides the game into tree search and whole-board value. Policy seems to combine the two into a decision.

Thanks again for the enjoyable tutorial.

P.S. Do you know that your youtube video ends rather…abruptly?

1 Like

Thank you for your work.

P.S. Do you know that your youtube video ends rather…abruptly?

Whoops, there’s actually a missing section that must have got lost when I exported the video or something. I’ll fix it and re-upload later today. Thanks for the heads up!

2 Likes

I just deployed my second image classifier on Heroku.
https://china-green-tea.herokuapp.com/
It tells you what type of Chinese green tea it is. It has only 3 types - Biluochun, Gunpowder green tea, Taiping Houkui. My idea was to add 9 types of green tea, but my model was not so good because of the quantity and quality of the images. Lesson learned - the data you train on is very important!

3 Likes

I have tested your model with two pictures, and it predicted the correct answer.
Here is what I've tested:


Wow! It works not only for me!
Thank you JonathanSum

Works nicely.

Hi everyone! After going through the 3rd lecture I decided to try making a model for handwritten letter recognition. I used the EMNIST letters dataset and after a little fine-tuning on the resnet34 model I was able to get ~92% accuracy.
Here’s the jupyter notebook

I also went a step further and created a React web app that allows the user to draw any letter and test the model. (I discovered that the drawn letters had to be large / cover a significant portion of the drawing area for better accuracy.)

You can check out the hosted application on this link
Here’s the app in action!
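
Roughly, the sizing fix I'm describing amounts to cropping the drawing to its bounding box and re-centring it before prediction (a simplified sketch; `canvas.png`, the helper name, and the exact sizes are just placeholders):

```python
from PIL import Image

def prepare_drawing(img_path, out_size=28):
    """Crop a white-on-black drawing to its bounding box, paste it centred on a
    square black canvas, then resize - so the letter fills the frame like EMNIST."""
    img = Image.open(img_path).convert('L')
    bbox = img.getbbox()                         # bounding box of the non-black pixels
    if bbox is not None:
        img = img.crop(bbox)
    side = max(img.size)
    canvas = Image.new('L', (side, side), 0)     # square black background
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((out_size, out_size))

# prepared = prepare_drawing('canvas.png')       # hypothetical canvas export
# pred, _, probs = learn.predict(prepared)       # fastai learner from the notebook
```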

8 Likes

This is so cool! Nice!

1 Like

This looks awesome!

1 Like

Hi, I hope you're having a wonderful day!
I agree with the points Pomo made, and I also think we will see more reinforcement learning models now that you have given such a good example. :trophy:
I read your post and found it concise, informative and enjoyable. After I finish my current projects, I have a reinforcement learning project in mind, which will be a little easier to complete now.

Cheers mrfabulous1 :smiley: :smile:

1 Like

Lesson 1. I don't have expertise in any field, so it was hard for me to decide what I wanted to recognize.
I was thinking about recognizing the venomous spiders in the place I live, but I've never met these things, so I don't know what they look like, and Google shows different spiders. So I took a bald-people dataset from Kaggle lmao.
It wasn't hard, I didn't even tune my model, and it's 99% accurate at recognizing a bald person, or at least it says it is =). Kinda disappointed in myself :frowning:
Hope to come up with something more challenging for the next lessons.

2 Likes

I just read your blog, nice work!

https://kirankamath.netlify.app/blog/matrix-calculus-for-deeplearning-part1/

blog on matrix calculus