I created a model which takes photographs of diseased eyes and categorizes them by disease. The error rate is still 25%, which is too high, but that may be partly because many diseases overlap: conjunctivitis, for example, is almost always present in orbital cellulitis.
Looking forward to lessons 2 onwards to see how I can train with multiple labels per image.
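For what it's worth, the standard way to handle multiple labels per image is to switch from softmax + cross-entropy (which forces exactly one class) to a sigmoid per label with binary cross-entropy, so each disease is predicted independently. A minimal PyTorch sketch of that idea (the toy head, image size, and label layout below are just placeholders, not your actual model):

```python
import torch
import torch.nn as nn

# Hypothetical setup: 3 disease labels that may co-occur on one image.
# Instead of softmax + cross-entropy (one label per image), use a sigmoid
# per label with binary cross-entropy, so each disease is scored
# independently and co-occurring diseases are both allowed to be "on".
num_labels = 3
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8, num_labels))  # toy head
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(4, 1, 8, 8)               # batch of 4 tiny fake images
y = torch.tensor([[1., 1., 0.],           # e.g. conjunctivitis + cellulitis
                  [0., 1., 0.],
                  [1., 0., 1.],
                  [0., 0., 1.]])

logits = model(x)
loss = loss_fn(logits, y)
loss.backward()

# At inference, threshold each sigmoid independently:
preds = torch.sigmoid(logits) > 0.5       # boolean (4, 3) tensor
```

This is the loss setup fastai uses under the hood once a dataset is set up as multi-label, which lesson 3 covers in detail.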
Presenting “Wall Doctor”: an image classifier to identify common issues in wall painting, such as blistering, patchiness, and microbial growth.
The idea is for consumers to post images to a painting/home service solution and give the service provider a better understanding of the issue. Please go through the notebook and let me know what I’ve got right/wrong.
A question related to implementation: suppose someone posts an image of their wall with other objects in the room. How do I separate these objects during training and classification, given that my main object of concern is the wall itself and not the chair/table/dresser in front of it?
After watching Lesson 1, I started building a resnet34 classifier for Lamborghini, McLaren, and Jaguar cars in Google Colab, creating a custom dataset from Google Images. Achieved a quite low 50% accuracy without making any changes.
Going through the forums helped resolve several issues. I tried resnet50 with bs=64, cleaned the dataset, and ran 8 epochs to achieve 80% accuracy. Using the LR Finder to obtain an optimal learning rate and training 4 more epochs helped achieve 90% accuracy.
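For anyone curious what the LR Finder is doing under the hood: it runs a short mock training pass while growing the learning rate exponentially and records the loss at each step; you then pick a rate from the region where the loss is still falling steeply, a bit before it blows up. A rough sketch of that idea in plain PyTorch (not fastai's actual implementation; the toy model and data are placeholders):

```python
import torch
import torch.nn as nn

# Sketch of the LR-range-test idea behind fastai's lr_find: train for a few
# iterations while increasing the LR exponentially from start_lr to end_lr,
# recording the loss at each step. Plot losses vs. lrs and choose an LR
# from the steep downward slope, before the curve diverges.
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
x, y = torch.randn(256, 10), torch.randn(256, 1)

start_lr, end_lr, num_steps = 1e-6, 1.0, 50
mult = (end_lr / start_lr) ** (1 / (num_steps - 1))  # exponential growth factor
opt = torch.optim.SGD(model.parameters(), lr=start_lr)

lrs, losses = [], []
lr = start_lr
for _ in range(num_steps):
    for g in opt.param_groups:
        g["lr"] = lr                      # set the current learning rate
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    lrs.append(lr)
    losses.append(loss.item())
    lr *= mult                            # grow the LR exponentially
```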
Quite excited to have built a deep learning model on my own within a week, with 90% accuracy. Thank you for the vast info in the forums, which is very helpful for self-learning!
Moving on to lesson 2.
I thought I’d have a go at using a Kaggle kernel - here, with more detail, for anyone interested.
Time, compute and skill limitations meant I had to simplify the task. I used a smaller set of categories, smaller images and just 10% of the training set. Still, I got what I thought were pretty good results regardless: 88% accuracy (excluding background pixels) and, in many cases, very reasonable-looking predictions (actual on left, predicted on right).
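In case anyone wants the same metric: accuracy "excluding background pixels" just masks out the background/void class in the ground truth before comparing. A small sketch (assuming background is code 0 here; CamVid-style datasets often use a 'void' code instead, so adjust `bg_code` to your dataset):

```python
import torch

# Pixel accuracy that ignores a designated "background"/void class in the
# ground-truth label map, as in the 88% figure above. The background code 0
# is an assumption; pass whatever code your dataset uses.
def acc_no_background(pred, target, bg_code=0):
    mask = target != bg_code                # keep only non-background pixels
    return (pred[mask] == target[mask]).float().mean()

target = torch.tensor([[0, 1], [2, 2]])    # toy 2x2 ground-truth label map
pred   = torch.tensor([[1, 1], [2, 0]])    # disagreement at bg pixel is ignored
print(acc_no_background(pred, target))     # 2 of 3 non-background pixels match
```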
Please refer to the quoted post for my implementation of “Wall Doctor”, a hypothetical consumer app on which users can upload images of their room walls to identify common issues such as microbial growth, patchiness, blistering, etc., and get suggestions for solutions/remedies/products accordingly.
I am currently trying to set up an image segmentation model to ‘segment’ different objects in the room, such as furniture, paintings, and the wall itself.
My question is: how do you use the output of an image segmentation model?
In the lecture Jeremy uses show_results to display the ground truth vs. predictions, but how would you use this output in an actual application? Can I get the pixel values of my different identified objects and return them accordingly?
Hi @tank13
I am struggling to deploy my text generator.
Can you give me some advice on what service is easiest and maybe some example that I can re-use?
Hi everyone, I am currently on lesson 5, where Jeremy taught us to build a neural network for the MNIST dataset from scratch using the fastai library and PyTorch. Here I am sharing a link to my .ipynb file, which contains the code for a neural network from scratch using just Python and NumPy. Hope you’ll find it useful. This notebook is inspired by the “Neural Networks and Deep Learning” course taught by Andrew Ng. You can tap on the link and click “Open with Colaboratory” to see the code: https://drive.google.com/file/d/1Sk9XC6OdC5gsIaYlTXPM5crTtGPqC2Ij/view?usp=sharing
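For anyone who wants the gist without opening the notebook, here is a minimal one-hidden-layer network with hand-written backprop in plain NumPy (a generic sketch, not the notebook's exact code; it learns XOR as a tiny stand-in task, whereas the notebook targets MNIST):

```python
import numpy as np

# Tiny one-hidden-layer network trained by hand-written backpropagation,
# using only NumPy. XOR is a stand-in task small enough to run instantly.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden (8 units)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (sigmoid + binary cross-entropy gradient is out - y)
    d_out = out - y
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)          # chain rule through sigmoid
    dW1 = X.T @ d_h
    # gradient-descent updates
    W2 -= lr * dW2; b2 -= lr * d_out.sum(0)
    W1 -= lr * dW1; b1 -= lr * d_h.sum(0)

preds = (out > 0.5).astype(int)
```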
In what form are you trying to deploy? I had a lot of trouble getting mine to work as a web app, since I wanted to use free services only. One I put on Zeit v1, though I believe that since they moved to version 2 the approach I used won’t work anymore. The other one I deployed using the Google App Engine, which was also a huge headache. I used GCP to train, and I built a docker image on my GCP instance and then deployed that, but it took multiple attempts before one succeeded and I think I was doing the same thing each time (I’m also super inexperienced when it comes to docker, so others might be able to deal with this better). But then some time later it just randomly stopped working, and I just haven’t gone through the steps again.
Sorry I don’t have anything more helpful to share! But broadly, I’d recommend going through the steps in the ‘production’ sections (e.g. https://course.fast.ai/deployment_google_app_engine.html) and dealing with specific problems as they arise. The process was definitely not smooth for me, but after a lot of googling and multiple tries, I got things to work… at least for a while!
Thanks,
I also deployed an image classifier on GCP in the past, but I hardly remember how. Only that I ran into many, many errors, which I fixed painfully one by one.
I will review the production sections. I don’t mind paying a small price if it makes things easier… so anyone reading this: recommendations (or better, examples!) are welcome
Thanks a lot
For the lesson 1 assignment, I used the Kaggle artwork dataset of paintings by 50 artists and tried to classify the artist.
Results summary:
Resnet34 with 4 epochs on the top layers gets a 33.8% error rate.
4 more epochs of fine-tuning reduce the error to 21.5%.
Resnet50 with 8 epochs on the top layer (no unfreeze) achieves a 20% error rate.
10 more epochs on all layers (unfreeze) result in a 15.5% error rate.
Further training would probably continue reducing the error rate, but I am not sure whether it is overfitting.
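The freeze-then-unfreeze recipe behind those numbers looks like this in plain PyTorch terms (toy modules stand in for the ResNet body and head; the learning rates are illustrative, not the ones actually used):

```python
import torch
import torch.nn as nn

# The two-stage fine-tuning recipe used above: first train only the new
# head with the pretrained body frozen, then unfreeze everything and train
# all layers, typically with a lower LR for the body than the head.
body = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # stand-in for pretrained backbone
head = nn.Linear(16, 50)                             # e.g. 50 artist classes

# Stage 1: freeze the body, optimize only the head.
for p in body.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
# ... train a few epochs ...

# Stage 2: unfreeze and train all layers, body at a smaller LR.
for p in body.parameters():
    p.requires_grad = True
opt = torch.optim.Adam([
    {"params": body.parameters(), "lr": 1e-5},   # discriminative LRs,
    {"params": head.parameters(), "lr": 1e-4},   # as with fastai's slice()
])
```

Checking the validation loss against the training loss after each stage is the usual way to tell whether further epochs are starting to overfit.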