After watching lessons 1-4, I decided to make a web app that classifies text by programming language. I searched the web but couldn’t find much research on this topic. One example (https://arxiv.org/pdf/1809.07945.pdf) uses a Multinomial Naive Bayes (MNB) classifier to achieve 75% accuracy, which is higher than Programming Languages Identification (PLI, a proprietary online classifier of snippets), whose accuracy is only 55.5%.
I was able to reach about 81% accuracy (as reported by fastai, although I’m not sure I’m measuring it the same way as the paper) after following the same basic steps as the IMDB example. This was done with a dataset I found from the author of the paper, here: https://github.com/Kamel773/SourceCodeClassification. I noticed that the dataset is pretty messy, and a lot of the CSS/HTML/JavaScript is misclassified as another language in that group. That is apparent in the confusion matrix:
I did find another, much bigger dataset of classified code on Kaggle (called “lots of code”), which hopefully has fewer misclassifications, and I am going to try to use it to improve the accuracy even further.
I am having way too much fun with this course. Thanks everyone!
Excited to share work predicting 12-year mortality from chest x-rays. Deep learning can extract prognostic information about health and longevity embedded in routine medical imaging. Made using fastai.
tl;dr First attempt at active learning for the fastai library. Includes random, softmax, and Monte Carlo Dropout standard-deviation calculations to measure the uncertainty of an example for a machine learning model.
I thought it would be cool to have active learning as a part of the fastai library. I am not an expert on active learning, but thought it would be a great way to learn about the field and its different algorithms. P2 of the fastai lessons has been very helpful in implementing the gist.
The implementation separates the uncertainty measurement from the active learning selection process. There are two options for selection:
Select the x most uncertain examples from the entire dataset.
Select the x most uncertain examples across all batches in the dataset.
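The first selection option can be sketched in plain Python. This is a minimal illustration, not the gist’s actual code: `predict` stands in for a model whose forward pass is stochastic (e.g. dropout left active at inference time, as in Monte Carlo Dropout), and the names here are hypothetical.

```python
import random
import statistics

def mc_dropout_uncertainty(predict, example, n_passes=10):
    """Std of predictions across stochastic forward passes (MC Dropout idea)."""
    preds = [predict(example) for _ in range(n_passes)]
    return statistics.pstdev(preds)

def select_most_uncertain(predict, dataset, x, n_passes=10):
    """Return indices of the x examples the model is least certain about."""
    scores = [(i, mc_dropout_uncertainty(predict, ex, n_passes))
              for i, ex in enumerate(dataset)]
    scores.sort(key=lambda t: t[1], reverse=True)  # highest std first
    return [i for i, _ in scores[:x]]

# Toy stochastic "model": prediction jitter grows with the input value,
# so larger inputs look more uncertain to the selection routine.
random.seed(0)
noisy_predict = lambda v: v + random.gauss(0, v)
picked = select_most_uncertain(noisy_predict, [0.1, 1.0, 5.0, 0.2], x=2)
```

The batch-wise variant would apply the same scoring per batch instead of over the whole dataset, trading a global ranking for lower memory use.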
I hope to continue working on the gist and fully implement different papers. If anyone wants to contribute or finds bugs feel free to pm me.
Shout out to @mrdbourke for implementing Monte Carlo Dropout in fastai.
I’m happy to present v1 of a comprehensive intro tutorial to geospatial deep learning (focused on building segmentation from drone imagery in Zanzibar) using fastai v1, the latest cloud-native geodata processing tools, and running fully self-contained on Google Colab for ease of learning (and free GPUs!):
Given how much is covered here, I’m sure that I missed many things (bugs, mis-assumed knowledge, janky code, bad links). I appreciate any and all feedback to make the next versions of this tutorial (and future ones) even better, so thank you in advance!
I watched the Part 1 video last year, with fastai v0.7, and I was just amazed to see how much better fastai performs in comparison to other deep learning libraries. I then wondered how it would perform against itself, and needless to say, the library did not let me down. I found a paper written by one of my college seniors in early 2019, using a thermal image dataset. At the time, they got a best-case accuracy of 97.08% and a validation loss of 11% using resnet101 and fastai v0.7.1, achieved after multiple parameter modifications and rounds of model tuning.
In July 2019, I present: fastai v1.0, resnet50, and 10 minutes of coding:
Model Accuracy: 99.38%
Training Loss: 1.4%
Validation Loss: 1.7%
After taking the first two classes of Part 2 (2019), I was able to understand how autograd works and how it is used in PyTorch. To be sure I grasped the concept, I created a simple autograd in JavaScript, along with a PyTorch-like implementation in JavaScript.
It is implemented in an ObservableHQ JS interactive notebook here
and you can also help me check this Medium post draft explaining the basic concept here
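For readers who haven’t seen the trick, the core of a scalar reverse-mode autograd (the same idea the notebook implements in JavaScript) fits in a few dozen lines of Python. This is a hedged sketch, not the notebook’s code; the `Value` class name and its methods are illustrative:

```python
class Value:
    """A scalar that records how it was computed, so gradients can flow back."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents    # nodes this value was computed from
        self._grad_fns = grad_fns  # d(self)/d(parent), as callables on the upstream grad

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))          # d(a+b)/da = d(a+b)/db = 1

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (lambda g, o=other: g * o.data,      # d(a*b)/da = b
                      lambda g, s=self: g * s.data))      # d(a*b)/db = a

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            for p, fn in zip(v._parents, v._grad_fns):
                p.grad += fn(v.grad)

# d(x*y + x)/dx = y + 1 = 4, d(x*y + x)/dy = x = 2
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
```

PyTorch does essentially this, with tensors instead of scalars and the graph built during the forward pass.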
Spending some time applying the approach really helped me understand the concepts more deeply, although I’m still not sure I fully understand how the time series aspect works with regard to trend. I noticed that in the Rossmann notebook there are no ‘recent trend’ type variables, i.e. something that describes the days and weeks leading up to each observation. Of course, this information is still there in the other observations, so I’m guessing that the date (day, month, year) embeddings encompass this in the network, e.g. October’s embedding encodes in some way that September is ‘nearby’, meaning that the model can account for recent trend (assuming it is predictive).
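One way to probe that guess is to compare learned month embeddings directly: if the model has discovered the ordering of months, adjacent months should have more similar vectors than distant ones. The sketch below uses toy 2-D vectors laid out on a circle as a stand-in for a learned embedding matrix (everything here is hypothetical; with a real model you would read the month embedding weights out of the trained network instead):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "month" embeddings on a circle, as a model might arrange them
# if it learned that months are cyclic (0 = January, ..., 11 = December).
month_emb = {m: (math.cos(2 * math.pi * m / 12),
                 math.sin(2 * math.pi * m / 12)) for m in range(12)}

sep, octo, mar = month_emb[8], month_emb[9], month_emb[2]
# Adjacent months come out more similar than distant ones:
close, far = cosine(sep, octo), cosine(sep, mar)
```

With real Rossmann embeddings, the interesting question is whether the trained vectors actually show this neighborhood structure or encode something else entirely.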
Loved the first part of the course so going onto part 2 now!
Created a web app and deployed it to Heroku. Despite an okay confusion matrix (see the notebook), the model often gets confused (try images from the homepage, which are in the training set, or from Google). Maybe I need a bigger dataset?
Hi everyone! I just started this course and I’m super pumped!
Following the suggested practice for lesson 1, I built a classifier that sorts paintings into their art period. I used 300 Google image search results for each of the 18 classes. My model has ~50% accuracy. I am not entirely sure how good this is, so I would appreciate any hints or insights into the work.
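One quick sanity check, assuming the 18 classes are roughly balanced (each has ~300 images): compare against the chance baseline of random guessing.

```python
# Chance baseline for a balanced 18-class problem (assumption: classes
# are roughly equal in size, since each was built from ~300 search results).
n_classes = 18
chance = 1 / n_classes
print(round(chance * 100, 1))  # → 5.6 (%), so ~50% accuracy is ~9x better than chance
```

That said, art periods bleed into each other, so a per-class confusion matrix would say more than the headline number.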