Share your work here ✅

Hi there,

I just put together a little tutorial about PyTorch DataLoaders and collate_fn that I wanted to share with you.
I think it will help anyone trying to get a better understanding of the inner workings of PyTorch DataLoaders in general, and of collate_fn in particular.
I present a very concrete example with toy data to explain what “collate” actually means, and I also show how to implement a custom collate function so that data loading works efficiently with sequences of different lengths.
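
To give a taste, the tutorial builds up to something like this (the toy data and padding value here are just for illustration):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

# Toy dataset: variable-length integer sequences, each with a label.
data = [(torch.tensor([1, 2, 3]), 0),
        (torch.tensor([4, 5]), 1),
        (torch.tensor([6, 7, 8, 9]), 0)]

def pad_collate(batch):
    # batch is a list of (sequence, label) tuples produced by the Dataset.
    seqs, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in seqs])
    # Pad every sequence to the length of the longest one in the batch.
    padded = pad_sequence(seqs, batch_first=True, padding_value=0)
    return padded, lengths, torch.tensor(labels)

loader = DataLoader(data, batch_size=3, collate_fn=pad_collate)
xb, lengths, yb = next(iter(loader))
print(xb.shape)  # torch.Size([3, 4]): padded to the longest sequence
```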
Take a look, and of course any feedback, comments, or ideas are more than welcome :slightly_smiling_face:

Cheers!

4 Likes

Hi All,

Sharing another blog post I made. I created a spam detection model on tabular data, trying out several different types of models. Take a look if you’re interested. It roughly coincides with Lesson 6:

Link

2 Likes

Hello!

I want to share an LLM application that aims to make the writing workflow easier for anyone who needs to write something.

Motivation for building it:
We took @jeremy’s advice, and we want to share this to help anyone who may find themselves in shoes similar to ours from a few months ago.
How do we do that? Through learning by doing, learning by writing, and learning by sharing!

Join our Discord to give us feedback! Castly

1 Like

Hello All,
Based on Lesson 1, I made a Chipa or Donut classifier. “Chipa” is a type of small bread-like baked good that’s popular in Paraguay and other parts of South America.

ChipaOrDonut

2 Likes

Hi everyone,

After a long break from ML, I’m refreshing my skills with fast.ai again. Here is a simple car tyre classifier that identifies legal vs. illegal tyres. The predictions work quite well considering the small data sample. There’s lots to do to improve it.

Hi all! I’m Tony. I’m a robotics engineer with a background in Electrical Engineering, who has found himself doing more and more software as time goes on.

After a long time of wanting to learn more about ML and deep learning, I finally started actually going through the course today. For a first project, I decided to build off the “Bird or Not” example. Since my kids are super into card games, I trained a classifier to predict whether a given image shows a card from Magic: The Gathering, Pokemon, or One Piece.

I was a little shocked to find that by fine-tuning a pretrained resnet model, even with pretty cruddy data scraped from the internet and no QA on my side, the classifier does a pretty decent job after just a few rounds of fine-tuning! I was also surprised to see how quick fine-tuning is, even using only a CPU with no acceleration (at least for this model)!
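
For anyone curious, the core of it is essentially the Lesson 1 recipe (the folder layout and epoch count here are illustrative, not my exact setup):

```python
from fastai.vision.all import *

# One folder per class, e.g. cards/magic, cards/pokemon, cards/one_piece
path = Path('cards')
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.2, seed=42,
    item_tfms=Resize(224))

# Fine-tune a pretrained resnet for a few epochs; this runs on a CPU, just slowly.
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
```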

While it isn’t a lot to look at, and nothing new to folks who have done the first week of the course, I’m using this as an opportunity to share and introduce myself to the folks on the forum!

Here’s the link to my AI image grass detector Kaggle notebook
Try it out!

Hi everyone,

As my first experiment after Lesson 1, I made a Fly vs. Mosquito classifier, and I believe I got a decent result.


This is a: fly.
Probability it’s a fly: 0.9995

Thanks Jeremy for sharing these fantastic and inspiring classes.

1 Like

I made an MNIST classifier after reading ch. 4. It was a bit tricky to deal with the 10 different labels, but I ended up using one-hot encoding with softmax and cross-entropy loss. After 80 epochs I was able to get to around 97% accuracy. Have a look and let me know what you think.
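
The key piece was the loss function; roughly this (the layer sizes and fake batch here are just for illustration):

```python
import torch
import torch.nn.functional as F

# Simple classifier: one output activation per digit 0-9.
model = torch.nn.Sequential(
    torch.nn.Linear(28 * 28, 64),  # flattened 28x28 input
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10))

# F.cross_entropy applies log-softmax itself, so the model outputs raw
# logits, and the targets can stay as digit labels 0-9.
xb = torch.randn(32, 28 * 28)      # fake batch of 32 images
yb = torch.randint(0, 10, (32,))   # fake labels
loss = F.cross_entropy(model(xb), yb)
loss.backward()
```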

I’m on Lesson 2, and I’ve made a car year predictor. For training, I used classes for car years from 1940 to 2020 in multiples of 5. For each of those years, I grabbed images of about ten car makes, 15 images per make per year, e.g. search_images_ddg('1980 toyota').
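
The download loop was roughly this (the make list is abbreviated here):

```python
from pathlib import Path
from fastai.vision.all import download_images
from fastbook import search_images_ddg  # helper from the course notebooks

makes = ['toyota', 'ford', 'honda']  # abbreviated; I used about ten makes
for year in range(1940, 2025, 5):
    for make in makes:
        dest = Path('cars') / str(year)
        dest.mkdir(parents=True, exist_ok=True)
        urls = search_images_ddg(f'{year} {make}', max_images=15)
        download_images(dest, urls=urls)
```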

I had to delete a bunch of motorcycles and airplanes.

I made it output a Gradio BarPlot with the probability of each year, and hosted it on HuggingFace: Car Year - a Hugging Face Space by loraxian
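
The output wiring is roughly this (the dummy probabilities here are just to make the sketch self-contained):

```python
import gradio as gr
import pandas as pd

def predict(img):
    # The real version runs the model; uniform probabilities here for illustration.
    years = [str(y) for y in range(1940, 2025, 5)]
    return pd.DataFrame({'year': years, 'probability': [1 / len(years)] * len(years)})

gr.Interface(fn=predict,
             inputs=gr.Image(),
             outputs=gr.BarPlot(x='year', y='probability')).launch()
```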

I also wanted to include a heatmap of the areas of the image that mattered most for the prediction. I tried asking ChatGPT for the code to do it, but what it produced didn’t work and was too complicated for me to understand.

2 Likes

Not sure if you saw the option, but there is a built-in interpretation view, although it isn’t quite as fancy as doing your own custom Grad-CAM. :slight_smile:

Add interpretation="default" to the gr.Interface(...) call.
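
A minimal self-contained example (this assumes gradio 3.x, where gr.Interface still accepts interpretation; the dummy predict function is just for illustration):

```python
import gradio as gr

# Dummy stand-in for a real model: scores by image brightness, so the
# interpretation demo runs on its own.
def predict(img):
    brightness = float(img.mean()) / 255.0
    return {"light": brightness, "dark": 1.0 - brightness}

gr.Interface(fn=predict,
             inputs=gr.Image(),
             outputs=gr.Label(),        # interpretation expects a Label output
             interpretation="default",  # occludes patches of the input and re-scores them
             ).launch()
```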

Thanks for letting me know about that. Just adding that parameter doesn’t seem to do anything except add an Interpret button, though. The examples are quite a bit more complicated, and I’ll probably have to come back to it.

Odd. This is what I see on mine when it runs with interpret.

Oh, I see. outputs has to be gr.outputs.Label(). “default” seems to give basically no information; it just highlights seemingly random parts. I was kind of hoping it would point out the particular curves on the car that made it choose the way it did. I’ll have to look at the other options for interpretation.

Also, I have no idea how this works. Doesn’t it need access to the model’s inner workings to make any kind of interpretation? All it’s being given is a list of probabilities for each class.

Thanks for bringing this to my attention, it’s very interesting.

Edit: Oh, so I guess Gradio is smart enough to search the prediction function for the model and work with it from there. I also tried using interpretation="shap" instead, and now it outlines the car, but it still doesn’t really show which features on the car led to the classification.

Building speech-to-speech translation for regional languages here in India. I would love to hear your opinions, and I’d welcome mentorship.

1 Like

Hi, my project is a tuna-type detector for sushi lovers. The model classifies three types of tuna: lean, medium fatty, and fatty. I made this model for myself because I always struggle to tell which one is fatty. This classification task is tricky even for humans, so I assume the DL model also finds it difficult.

project link

1 Like

Hello FastAI friends!

I made an image recognizer which can give sensible predictions even when the image being presented isn’t one of the classes contained in the training data. Check it out!

TLDR: I changed the CategoryBlock to a MultiCategoryBlock, which switches the loss function from CrossEntropyLoss to binary cross-entropy. Then I used a threshold to filter out predictions with a low score. This approach wouldn’t work with CrossEntropyLoss, because its softmax drives the highest output activation up towards 1.
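
The change itself is small; a sketch of the pattern (the folder layout and threshold value here are illustrative):

```python
from fastai.vision.all import *

dls = DataBlock(
    blocks=(ImageBlock, MultiCategoryBlock),  # instead of CategoryBlock
    get_items=get_image_files,
    get_y=lambda p: [p.parent.name],          # wrap the single label in a list
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224),
).dataloaders(Path('images'))

# vision_learner picks a binary cross-entropy loss for multi-label data.
learn = vision_learner(dls, resnet18, metrics=accuracy_multi)
learn.fine_tune(3)

# At inference, keep only labels whose sigmoid score clears a threshold;
# an out-of-domain image can then fail every class.
preds, _ = learn.get_preds()
confident = preds > 0.8
```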

I hope this page helps some of you who are wondering why your cat/dog detector still confidently predicts that something is a cat, when really you showed it a picture of your face.

As some of you may have noticed, training an image classifier is fairly quick using the example from Lesson 1. But the resulting model still confidently predicts one of the training classes even when the image doesn’t belong to any of the classes it was trained on.

The preferred behaviour would be for the model to be less confident when presented with an image from outside the training data.

I got all the info from the fastai 2022 course and the docs. The post had just been sitting around doing nothing, so I thought I would share it here.

Please let me know if you have any feedback, especially if you’ve done something similar to handle ‘out of domain’ predictions in a model you’ve built.

2 Likes

If you wish to quantify the uncertainty further, take a look at the methods of ‘conformal prediction’.

1 Like

Thanks so much!

Hi everyone: I created a model to identify edible vs. poisonous mushrooms, and I was surprised that it worked on the first try. edible_vs_poisonous_mushrooms | Kaggle

It was fun, but I didn’t really learn anything; I just copied and pasted the code from Lesson 1. Can someone suggest what I should do next and what I should focus on?

Thank you in advance for your help.