Share your work here ✅

I completed lesson 1 and used the lesson 2 download notebook.
I downloaded images of different types of racing cars:

  • Formula 1
  • Formula E
  • Indycar

These cars look very similar, so I thought they would make a good classification challenge for resnet34.


After around an hour of work I got to around 80% accuracy.

The result is not bad, but the data could be a lot cleaner. I also suspect that increasing the size of the dataset would help.

I did all the work on my custom-built Linux machine at home, with a Threadripper CPU and an Nvidia 1080.
I used a batch size of 32.
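For anyone curious, the setup was essentially the lesson-1/2 pattern. Here is a minimal sketch in fastai v1; the folder name, transforms and epoch count are placeholders rather than my exact code:

```python
from fastai.vision import *

# images downloaded into data/racing_cars/{formula1,formulae,indycar}/ (hypothetical layout)
path = Path('data/racing_cars')
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224, bs=32
                                 ).normalize(imagenet_stats)

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)   # epoch count is a guess, not the exact run
```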

2 Likes

I created a model which takes photographs of diseased eyes and categorizes them by disease. The error rate is still 25%, which is too high, but that is perhaps because many diseases overlap, e.g. conjunctivitis is almost always present in orbital cellulitis.

Looking forward to taking lessons 2+ to see how I can train with multiple labels per image.
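For what it’s worth, the multi-label pattern from the later lessons (the planet notebook) looks roughly like this in fastai v1; the folder, CSV name and threshold below are placeholders, not anything from this project:

```python
from functools import partial
from fastai.vision import *

path = Path('data/eye_diseases')   # hypothetical folder containing images/ and labels.csv
# labels.csv is assumed to have an image-name column and a column of
# space-separated disease labels per image
data = (ImageList.from_csv(path, 'labels.csv', folder='images')
        .split_by_rand_pct(0.2)
        .label_from_df(label_delim=' ')
        .transform(get_transforms(), size=224)
        .databunch(bs=32)
        .normalize(imagenet_stats))

learn = cnn_learner(data, models.resnet34,
                    metrics=partial(accuracy_thresh, thresh=0.2))
learn.fit_one_cycle(4)
```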

The link now shows a 404 error. Can you please reshare?

Presenting “Wall Doctor”: an image classifier to identify common issues in wall painting such as blistering, patchiness and microbial growth.
The idea is for consumers to post images to a painting/home-service solution and give the service provider a better understanding of the issue. Please go through the notebook and let me know what I’ve got right/wrong.
A question related to implementation: suppose someone posts an image of their wall with some objects in the room. How do I separate those objects during training and classification, given that my main object of concern is the wall itself and not the chair/table/dresser in front of it?

Wall Doctor using FastAi

3 Likes

After watching Lesson 1, I started building a resnet34 model to classify Lamborghini, McLaren, and Jaguar cars in Google Colab, using a custom dataset created from Google Images. It achieved a rather low 50% accuracy without any changes.
Going through the forums helped resolve several issues. I tried resnet50 with bs=64, cleaned the dataset, and ran 8 epochs to reach 80% accuracy. Further work with the LR Finder to obtain a good learning rate, plus 4 more epochs, helped achieve 90% accuracy.
[error rate plot and confusion matrix]
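Roughly, the steps above correspond to the following in fastai v1 (the learning-rate range is a placeholder, and running the final epochs unfrozen is an assumption on my part):

```python
learn = cnn_learner(data, models.resnet50, metrics=accuracy)  # data built with bs=64
learn.fit_one_cycle(8)                    # 8 epochs on the head -> ~80%

learn.lr_find()                           # LR Finder
learn.recorder.plot()                     # pick a learning-rate range from the plot
learn.unfreeze()
learn.fit_one_cycle(4, max_lr=slice(1e-5, 1e-4))   # 4 more epochs -> ~90%
```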

These are the links in the fast.ai forums that helped me fix the errors and move from 50% to 90% accuracy. GitHub link

Quite excited to have built a deep learning model on my own, within a week, with 90% accuracy :grinning: Thank you for the vast info in the forums, which is very helpful for self-learning!!
Moving on to lesson 2 :grinning: :grinning:

3 Likes

For lesson 3’s homework I had a go at segmentation using data from the current iMaterialist (Fashion) 2019 at FGVC6 competition on Kaggle.

I thought I’d have a go at using a Kaggle kernel - it’s here, with more detail, for anyone interested.

Time, compute and skill limitations meant I had to simplify the task: I used a smaller set of categories, smaller images and just 10% of the training set. Still got what I thought were pretty good results regardless - 88% accuracy (excluding background pixels) - and in many cases very reasonable-looking predictions (actual on left, predicted on right).
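In case it’s useful to anyone, the “accuracy excluding background pixels” metric can be written in the same spirit as acc_camvid from the CamVid notebook. A sketch, assuming background is encoded as class 0 (an assumption for this example):

```python
import torch

def acc_ignore_background(input, target, bg_code=0):
    """Pixel accuracy over non-background pixels.
    input:  model output of shape (bs, n_classes, H, W)
    target: ground-truth masks of shape (bs, 1, H, W)"""
    target = target.squeeze(1)
    keep = target != bg_code                     # drop background pixels from the count
    return (input.argmax(dim=1)[keep] == target[keep]).float().mean()

# pass it as a metric, e.g. unet_learner(data, models.resnet34, metrics=acc_ignore_background)
```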

Really enjoying the practical side of the course, so I’m trying to keep up by doing a personal project each week.

6 Likes

And how did you end up evaluating the tabular model on a new dataset?
I ran into a similar problem and ended up writing my own set of functions to do it (you can see them in this post, Some useful functions for tabular models, and in this Rossmann data notebook: https://github.com/Pak911/fastai-shared-notebooks).
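For reference, the simplest version (not the functions from the linked post) of applying a trained fastai v1 tabular Learner to a brand-new DataFrame looks something like this; new_df is just a stand-in name:

```python
# new_df must have the same categorical/continuous columns used in training
preds = [learn.predict(row)[0] for _, row in new_df.iterrows()]   # row-by-row predictions

# for larger frames it is usually faster to attach the data as a test set up front:
# test = TabularList.from_df(new_df, cat_names=cat_vars, cont_names=cont_vars)
# build the DataBunch with .add_test(test), then learn.get_preds(ds_type=DatasetType.Test)
```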

2 Likes

Hey everyone,

Please refer to the quoted post for my implementation of “Wall Doctor”, a hypothetical consumer app where users can upload images of their room walls to identify common issues such as microbial growth, patchiness, blistering, etc. and get suggestions for solutions/remedies/products accordingly.
I am currently trying to set up an image segmentation model to ‘segment’ different objects in the room, such as furniture, paintings, and the wall itself.

My question is: how do you use the output of an image segmentation model?
In the lecture Jeremy uses show_results to display the ground truth vs. the predictions, but how would you use this output in an actual application? Can I get the pixel values of my different identified objects and return them accordingly?
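In other words, is something along these lines a reasonable pattern? (A sketch assuming a fastai v1 unet_learner and that ‘wall’ is one of the mask classes; the file name and class index are made up.)

```python
from fastai.vision import open_image

img = open_image('room.jpg')              # hypothetical uploaded photo
pred, _, _ = learn.predict(img)           # pred is an ImageSegment
mask = pred.data.squeeze(0)               # (H, W) tensor of per-pixel class indices

wall_code = 3                             # index of the 'wall' class in the codes list (made-up value)
wall_pixels = (mask == wall_code)         # boolean mask of wall pixels
wall_fraction = wall_pixels.float().mean().item()

# e.g. blank out everything that is not wall before running the defect classifier
```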

1 Like

:heart_eyes: Love it

1 Like

@alvisanovari Love it - wish it were still up in production. https://cold-or-canker.now.sh/

link down?

1 Like

Hi @tank13
I am struggling to deploy my text generator.
Can you give me some advice on which service is easiest, and maybe share an example that I can re-use?

Thanks a lot in advance!

Hi everyone, I am currently on lesson 5, where Jeremy taught us to build a neural network for the MNIST dataset from scratch using the fastai library and PyTorch. Here I am sharing a link to my .ipynb file, which contains the code for a neural network from scratch using just Python and NumPy. Hope you’ll find it useful. This notebook is inspired by the “Neural Networks and Deep Learning” course taught by Andrew Ng. You can tap on the link and click “Open with Colaboratory” to see the code:
https://drive.google.com/file/d/1Sk9XC6OdC5gsIaYlTXPM5crTtGPqC2Ij/view?usp=sharing
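As a small taste of what “from scratch with just Python and NumPy” means, here is a tiny self-contained two-layer network on toy data (a fresh sketch, not code copied from the notebook):

```python
import numpy as np

def sigmoid(z): return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))                  # 100 samples, 2 features
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy binary target

W1, b1 = rng.standard_normal((2, 8)) * 0.1, np.zeros((1, 8))
W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros((1, 1))
lr = 0.5

for epoch in range(1000):
    # forward pass
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0)                         # ReLU
    z2 = a1 @ W2 + b2
    a2 = sigmoid(z2)                               # predicted probabilities

    # backward pass: for binary cross-entropy, dL/dz2 = (a2 - y) / n
    n = len(X)
    dz2 = (a2 - y) / n
    dW2, db2 = a1.T @ dz2, dz2.sum(0, keepdims=True)
    dz1 = (dz2 @ W2.T) * (z1 > 0)                  # ReLU derivative
    dW1, db1 = X.T @ dz1, dz1.sum(0, keepdims=True)

    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f'training accuracy: {((a2 > 0.5) == y).mean():.2f}')
```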

In what form are you trying to deploy? I had a lot of trouble getting mine to work as a web app, since I wanted to use free services only. One I put on Zeit v1, though I believe that since they moved to version 2 the approach I used won’t work anymore. The other one I deployed using the Google App Engine, which was also a huge headache. I used GCP to train, and I built a docker image on my GCP instance and then deployed that, but it took multiple attempts before one succeeded and I think I was doing the same thing each time (I’m also super inexperienced when it comes to docker, so others might be able to deal with this better). But then some time later it just randomly stopped working, and I just haven’t gone through the steps again.

Sorry I don’t have anything more helpful to share! But broadly, I’d recommend going through the steps in the ‘production’ sections (e.g. https://course.fast.ai/deployment_google_app_engine.html) and dealing with specific problems as they arise. The process was definitely not smooth for me, but after a lot of googling and multiple tries, I got things to work… at least for a while!
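If it helps, the serving side itself is only a few lines. Here is a bare-bones sketch using fastai v1 with Flask (a stand-in for the Starlette setup in the official guide; the paths, field name and port are placeholders), assuming the model was exported with learn.export():

```python
from io import BytesIO
from flask import Flask, request, jsonify
from fastai.vision import load_learner, open_image

app = Flask(__name__)
learn = load_learner('.', 'export.pkl')       # file created earlier with learn.export()

@app.route('/predict', methods=['POST'])
def predict():
    # expects a multipart upload with a 'file' field containing the image
    img = open_image(BytesIO(request.files['file'].read()))
    pred_class, _, probs = learn.predict(img)
    return jsonify({'class': str(pred_class), 'confidence': float(probs.max())})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)        # App Engine expects port 8080
```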

Very interesting use case!

Bears, Beets, Battlestar Galactica

Unfortunately I took it down a while back. It’s not that hard to set up though; you can leverage the Lesson 1/2 (teddy bear?) notebooks.

Thanks,
I also deployed an image classifier on GCP in the past, but I hardly remember how; only that I found maaaaany errors that I fixed painfully one by one.
I will review the production sections. I don’t mind paying a small price if it makes things easier… so anyone reading this: recommendations (or better, examples!) are welcome :slight_smile:
Thanks a lot

You might want to have a look at this, in case you haven’t seen it before. It might be useful for what you are asking. Thanks.

https://forums.fast.ai/t/image-segmentation-understanding-inference/45855/13

Who is the artist?

For the lesson 1 assignment, I used the Kaggle artwork dataset of paintings by 50 artists and tried to classify the artist.

Results summary:

  • Resnet34 with 4 epochs on the top layers gets a 33.8% error rate; 4 more epochs of fine-tuning reduce the error to 21.5%.
  • Resnet50 with 8 epochs on the top layer (no unfreeze) achieves a 20% error rate; 10 more epochs on all layers (unfreeze) result in a 15.5% error rate.

Further training would probably continue reducing the error rate, but I am not sure whether it is overfitting.

It was interesting to see the error analysis.
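The error analysis comes from fastai’s interpretation tools; roughly this (assuming learn is the trained Learner; the figure sizes and min_val are just illustrative values):

```python
from fastai.vision import *   # as in the lesson notebooks

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9, figsize=(12, 12))       # paintings the model got most wrong
interp.plot_confusion_matrix(figsize=(10, 10), dpi=60)
interp.most_confused(min_val=3)                   # artist pairs most often mixed up
```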

Notebook on GitHub

3 Likes

For some reason, it says I do not have access to this topic :frowning_face: