Share your work here ✅

Still doing the first part of the lessons. Just finished the second video.
I started messing around with a dataset I found on kaggle on Arabic handwritten letters (https://www.kaggle.com/mloey1/ahcd1)

Using the frozen resnet34 I got an error rate of ~0.086 after 6 epochs, which is not bad.

I then unfroze the model and got much better accuracy: the error rate was cut by more than 50%:

1 Like

No, I don’t have one. If I come across some, I’ll let you know. Thanks

Hey peeps. So I trained a neural network that can tell whether a given building belongs to one of three architectural styles: Art Deco, Brutalism, or Japanese traditional. Not sure what utility it provides, but if that sounds interesting you’re welcome to check it out here. The process of constructing everything was a lot of fun. Huge thanks to Jeremy and everyone involved in building & maintaining the course; I haven’t had such a pleasant learning experience in quite some time.

Hi there!

As someone who is just delving into AI for the first time, allow me to share what I’ve been working on the past few weeks.

I’ve created an AI model using fastai’s pretrained ResNet models (largely following the steps from lessons 1 & 2) to detect whether a face is wearing a mask or not. Basically a mask detector.

I’ll post my GitHub project page here. If you’d like to take a look at it, use Mask Detector.ipynb. In that notebook you’ll be able to load a YouTube video and apply the AI models there. Also included is a webcam streaming module, so you’ll be able to test it out on yourself.

Instructions for the dependencies are also listed there, including where to download the models and the face detection algorithm (I’m using MTCNN).

I’ve gotten great results on pictures, but I’m still working hard on improving accuracy on video streams with small faces. Any tips or feedback are highly welcome!

Thanks, Jeremy for introducing me to this world!

3 Likes

Hi all,
I tried my hand at the Kaggle competition Tensorflow Speech Recognition Challenge, classifying 30 words.
I used the regular image classification notebook to classify spectrogram images of the words and achieved 93% accuracy. Here is the link to my notebook.
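For anyone curious how spoken words become images in this approach: a spectrogram is just the magnitude of a windowed FFT computed over time. Here is a minimal numpy sketch (not the notebook’s actual code; the synthetic tone stands in for a recorded word):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple windowed FFT (STFT)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    # One FFT per frame; keep only the non-negative frequencies.
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T
    # Log scale so quieter components stay visible, as is typical for audio.
    return np.log1p(spec)

# A 1-second, 16 kHz clip of a 440 Hz tone as a stand-in for a spoken word.
sr = 16000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t)
img = spectrogram(clip)  # shape: (freq_bins, time_frames)
```

Saving `img` as a PNG per clip then turns the speech task into the standard image classification workflow from lesson 1.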


3 Likes

Hello,

Tried to classify different sport balls: baseball, cricket, golf, ping-pong, rugby, soccer, tennis, volleyball … and an egg. Kinda random, I know. Anyway, I used 70 pictures of each class and tried to follow the steps. It didn’t work the way it should have; I didn’t really like the loss/validation numbers I got at the end. Any tips for improvement?

Just wanted to give back to the great community:
Here is an end-to-end tutorial on Arabic character recognition:

1 Like

Hi, I have added a post describing how to run a fastai model (and optimize it with TensorRT) on a Jetson Nano device. The whole thing runs in Docker, so the setup is really simple.
For pets classification (from the course) I achieved:

  • without TensorRT – average(sec):0.0446, fps:22.401
  • with TensorRT – average(sec):0.0094, fps:106.780
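The fps figures above are simply the reciprocal of the average per-image latency. A minimal timing sketch of how such numbers can be measured (the `predict` function and inputs here are placeholders, not the post’s actual benchmark):

```python
import time

def benchmark(predict, inputs):
    """Return average per-item latency (seconds) and throughput (items/sec)."""
    start = time.perf_counter()
    for x in inputs:
        predict(x)
    avg = (time.perf_counter() - start) / len(inputs)
    return avg, 1.0 / avg

# Sanity check against the numbers above: fps = 1 / average latency.
# 1 / 0.0446 s  ->  ~22.4 fps;  1 / 0.0094 s  ->  ~106 fps
```

In practice one would warm the model up with a few untimed predictions first, since the first inference pass is usually much slower.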

Read more on:

5 Likes

Hi all!

I compiled a guide on creating a representative test set for machine learning models and published it on Medium. Any feedback is welcome!

2 Likes

What do you get when you take the world’s best minds and feed their advice into GPT-2? Instant mentorship!

Head to https://mastermind.fyi/ and get AI-generated advice.

I also believe this is the first app with a @jeremy easter egg! From your computer browser (works on Chrome), go to the footer of the app and type ‘jeremy’ on your keyboard to hear the key to learning AI fast :wink: (turn on audio).

Even though the app was built with a Hugging Face script (repo below), I credit Fast.AI and this community for giving a finance guy the confidence to build something outside of Excel.

1 Like

After finishing the first lesson, I thought: why not try to deploy that model? I know deployment is taught at a later stage, but learning by doing is definitely more interesting. After 3 hours of googling and reading the fastai documentation, I succeeded in deploying my very first model.

https://test-dog-cat.onrender.com/

Hey guys, check out this blog post I wrote:

Any suggestions are welcome :slight_smile: !

Thanks to Fastai, I have created a model to predict Supreme court judgments from India. Hopefully, this is a first step towards creating a bigger solution that can be adapted in India’s judiciary to help alleviate the huge pendency problems there.

Please feel free to share comments or get in touch. Thanks.
Link to Medium article

You can find the code at the bottom of the article.

1 Like

Thank you, fast.ai team, for making this course and an amazing library!
As homework for lesson 1, I decided to try and tackle this Kaggle challenge: Plant Pathology 2020 - FGVC7 (https://www.kaggle.com/c/plant-pathology-2020-fgvc7/overview)

The fastai library is like magic … I was able to achieve a score of 0.964 just by following a procedure similar to the one used for the pets dataset in lesson 1!

Thank you so much, and I can’t wait to proceed to the further lessons!

you can check my work here if you are interested :smiley: : https://github.com/AtharvBhat/ML_experiments/blob/master/Plant%20Pathology%202020%20-%20FGVC7.ipynb

2 Likes

Hello everyone!

Status update on Kaggle’s COVID19 Global Forecasting Competition and my work on it, using only fast.ai models and almost no feature engineering.

I decided to move to fast.ai V2 to achieve better performance, which is what I highly recommend to all of you after this experience. In Week 4 of the competition, after doing some googling about neural nets and tabular data, I found that TabNet may be a good choice, particularly because it has already been implemented for fast.ai V2 in the fast_tabnet extension. Switching from the normal tabular model to TabNet took effectively 5 lines of code, including setting hyperparameters: big praise for the author of the fast_tabnet module.

Then I played around with adding some more data and training more and more, which resulted in a pretty good final private score: 58th among 472 competitors! The notebook was shared on Kaggle’s fanpage, which also gave it some momentum; thank you folks! The performance of the model turned out to beat some RF/XGB approaches that used a small amount of feature engineering. My final thought is that having some RNN model for fast.ai tabular could have been much more efficient in this competition; I’m looking forward to seeing such modules :grinning:
Week 4 Kaggle notebook

The overall metric (a simple RMSLE on predicted cases for each place and day) was widely criticized by the top Kaggle competitors on the forums, not to mention the gradual slowdown of the epidemic itself (in terms of exponential growth). That’s what led the organizers to change the metric to a, on the contrary, weirdish and complicated Weighted Pinball Loss, along with quantile regression: predicting the 0.05, 0.5 and 0.95 quantiles instead of just one number of cases. So that was sort of a challenge for me: to understand and implement those metrics and create a custom loss function in PyTorch.

After some coding, debugging shapes, etc., I finally managed to get things done, again in fast.ai V2. This time I dropped TabNet and used a vanilla 2-layer tabular MLP model from the library, hoping that it would perform better on daily predictions (in week 4 and before, we predicted cumulative cases) and be easier to train. The main difficulty was the loss function. I implemented an L1-ish thing for the quantiles, which couldn’t train effectively for more than 15-20 epochs on a small learning rate, which is nothing special for tabular models. Then I created an L2-ish loss based on it, hoping that it would allow for longer training and thus better validation metrics. Unfortunately the predictions didn’t look very sensible, and I didn’t have much time (I started 2 days before the deadline), so I went back to the previous L1 thing.

The first (partial) private score results came as a surprise: I ranked 8th out of 173 competitors! (current private leaderboard here). Now it’s around 10th after the latest update, so still not bad at all for me. Again, I decided to share the notebook before the deadline (My week 5 Kaggle notebook), and it turned out some folks forked and modified it, and they also rank high in the current leaderboard: 2 more fast.ai submissions in the top 20! Thank you very much for the library, long live fast.ai!
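For reference, the pinball loss itself is small once written down: under-prediction is penalized in proportion to the quantile q, over-prediction in proportion to 1 − q. A minimal numpy sketch (the author’s actual PyTorch implementation and the competition’s per-row weighting differ):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for a single quantile q in (0, 1)."""
    err = y_true - y_pred
    # Under-prediction (err > 0) costs q per unit; over-prediction costs 1 - q.
    return np.mean(np.maximum(q * err, (q - 1) * err))

# One loss term per predicted quantile, averaged, as in quantile regression.
y = np.array([10.0, 20.0, 30.0])
preds = {0.05: np.array([5.0, 15.0, 25.0]),   # deliberately low
         0.50: np.array([10.0, 20.0, 30.0]),  # spot on
         0.95: np.array([12.0, 22.0, 32.0])}  # deliberately high
total = sum(pinball_loss(y, p, q) for q, p in preds.items()) / len(preds)
```

Note how the low 0.05 predictions and high 0.95 predictions are both cheap, which is exactly what pushes the model to produce a spread-out interval rather than three identical outputs.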

4 Likes

Hi everyone, I wrote some code that generates animated plots to visualize optimization paths for neural nets, inspired by the paper “Visualizing Loss Landscapes of Neural Nets” mentioned in the lessons. Here is my Medium post.

Let me know if you like it and feel free to leave comments. I’m thinking about making it a lightweight Python package for more generic use so that people can quickly generate animated plots like this. Cheers!
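For anyone wanting to reproduce the idea, the trajectories in such animations are just recorded parameter positions during descent. A minimal sketch computing a gradient-descent path on a toy quadratic loss (the toy loss and function names here are illustrative, not the post’s code):

```python
import numpy as np

def gd_path(grad, start, lr=0.04, steps=50):
    """Record the sequence of positions visited by plain gradient descent."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(steps):
        pos -= lr * grad(pos)
        path.append(pos.copy())
    return np.array(path)  # shape: (steps + 1, 2)

# Toy elongated bowl: f(x, y) = x^2 + 10 y^2, so grad = (2x, 20y).
grad = lambda p: np.array([2 * p[0], 20 * p[1]])
path = gd_path(grad, start=(2.0, 1.0))
# Frame i of an animation would plot path[:i] over the loss contours.
```

The elongated bowl already shows the classic zigzag along the steep axis, which is the kind of behavior these animated plots make easy to see.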

2 Likes

Hi logancyang I hope you are having a wonderful weekend!

I just read your Medium post and found it delightful. As we know, there are a number of learning styles, such as visual, auditory, and kinesthetic (see Learning styles).

I think any work you or others can do to help make models easier to visualize and understand is not only wonderful but has very real and serious implications for people in general.
For example, AI models are being implemented in many domains and are impacting people’s lives in both positive and negative ways.
Having recently completed a lesson on ethics with @jeremy and @rachel, it is clear that many models can be biased, sometimes by accident and sometimes on purpose. Your and others’ visualizations make it far easier for lay people and ML specialists to explain how a model is working.

I think a lightweight package would be very useful.

I am not registered with Medium so I couldn’t give you a clap :clap:

Great work!

mrfabulous1 :grinning: :grinning:

2 Likes

Thanks for the comment @mrfabulous1, glad you like the article! I’m adding the packaging step to my to-do list and will prioritize it according to people’s feedback. Appreciate the kind words!

Hi Logan Yang,

I enjoyed your Medium post very much! Totally agree that developing intuitions through simple numerical experiments will make us better designers.

After looking at your animations, I wonder which architecture would generalize the spirals? That is, extend the spiral boundaries in the way we do easily and naturally. If a model cannot handle such a simple extrapolation, machine learning is missing some fundamental capacities.

(I do understand that *,+, and ReLU can only fit lines and polygons. But does this impose a fundamental limitation on what can be learned?)

1 Like

Hi @Pomo, glad you like the article! I recently listened to Lex Fridman’s interview with Ilya Sutskever, and it made me think about how some of the visual abilities we take for granted could be transferred to neural networks. I don’t think there is a universal way to do this transfer yet, since we don’t know how our brain works. But there definitely can be ways to tackle specific small problems.

The universal approximation theorem tells us that neural networks can approximate any function. Take this spiral data as an example: we could give it some prior, say, formulate it as a regression instead of a classification and let it fit the spirals. An even stronger prior is to manually tell it to fit a spiral-like function in polar coordinates. Of course, that defeats the purpose of letting it learn the form by itself. But you see, it’s hard to draw the line on how much prior knowledge we can give it.

As for the question of how we intuitively do this, I think what we do is just a kind of regression in our mind, so it’s not too different from pre-programming with a prior. We are more sensitive to simple functions that exist in our world. e.g. spirals in nature like a cyclone, spiral shells, galaxies, water swirl, or just in a math class. So I think given a certain task formulation and a certain number of examples, artificial NNs are capable of solving these specific problems.
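For readers who want to experiment with this themselves, the two-spiral toy problem discussed above can be generated in a few lines. A minimal numpy sketch (the function and parameter names are my own, not from the original post):

```python
import numpy as np

def make_spirals(n=200, turns=2.0, noise=0.05, seed=0):
    """Two interleaved spirals: the classic toy classification dataset."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.25, turns * 2 * np.pi, n)
    r = theta / (turns * 2 * np.pi)            # radius grows with the angle
    x0 = np.c_[r * np.cos(theta), r * np.sin(theta)]
    x1 = -x0                                   # second arm: rotate by pi
    X = np.vstack([x0, x1]) + rng.normal(scale=noise, size=(2 * n, 2))
    y = np.r_[np.zeros(n), np.ones(n)]         # class label per point
    return X, y

X, y = make_spirals()
```

Raising `turns` makes the decision boundary wind more tightly, which is a quick way to probe how far a given architecture can extrapolate the pattern, per the question above.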

3 Likes