Share your work here ✅

Image Classifier with GitHub Actions workflow

I just made my first model following Jeremy’s video.
Its purpose is to classify images into two categories: NBA players or tennis players.
It’s basically the same model as in the lesson notebook.
https://colab.research.google.com/drive/1J8FY7BpVOKSUQB9TyfMiyYxUjWs-IogD#scrollTo=hXLEG8Eypzd7

Hi all,

I recently started the fast.ai DL course, and am slowly moving through it. Last week I finished Lesson 1, and then realized that people were uploading their homework, so I started searching for a dataset on Kaggle.

There I found a Brain MRI dataset for predicting the presence of brain tumours. This was the first time I used fastai, so I spent a lot of time reading the basics of the documentation.

Anyway, here’s my work:


After 2 lessons I created a Jedi/Sith detector. The model isn’t that great, because it sometimes thinks images with a dark background are Sith and images with a light background are Jedi. I need to train it with more diverse images.

You can try it here:

Some examples:


So, after watching the 1st lecture and implementing an image classifier for objects other than birds, I thought: why not help my college mates get their hands dirty with deep learning? I created a blog using Quarto (since fastpages is deprecated, I guess) and posted this tutorial.
Also, can someone please suggest Python libraries for searching and downloading images? The DuckDuckGo one has been hit and miss for me for the past few days.
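One alternative worth trying is the icrawler package, which crawls Bing image search and saves results straight to disk. A minimal sketch (the search term and folder name are just placeholders):

```python
from icrawler.builtin import BingImageCrawler

def download_images(term, folder, max_num=30):
    # Download up to max_num Bing image-search results for `term` into `folder`.
    crawler = BingImageCrawler(storage={"root_dir": folder})
    crawler.crawl(keyword=term, max_num=max_num)

download_images("bird photo", "images/bird")
```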
P.S.: This course is just :pinched_fingers:

After 2 lessons, I have created a classifier that can identify one of three superheroes (Batman, Superman, Flash). Check out this link

I recently watched the first lesson and, as suggested, tried building a binary image classification model. I do not yet completely understand the code and how it works, but I hope that as I go through this course I will develop a better understanding of how it all works.

I tried to classify healthy plant leaves versus diseased plant leaves, and the model does a pretty good job at it. I made it on Google Colab and re-uploaded it to Kaggle.
Here is the link to the Kaggle Notebook:

Let me know what you think :innocent:


I’ve just completed my first session’s coding (is it a bird or not?).
Actually, I had a problem with downloading the images (the list came back empty).
I solved it by making changes to the search_images function; a sketch of the change is below.
This is my notebook on Kaggle.
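Roughly, the change amounts to rebuilding search_images on top of the duckduckgo_search package directly (a sketch, assuming a recent duckduckgo_search with the DDGS class, not my exact code):

```python
from duckduckgo_search import DDGS
from fastcore.all import L

def search_images(term, max_images=30):
    # DDGS().images returns dicts; the 'image' key holds the photo URL.
    return L(DDGS().images(term, max_results=max_images)).itemgot("image")

urls = search_images("bird photos", max_images=5)
print(urls[0])
```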

Let’s go for session 2…


As homework for Lesson 1, I tried to put a twist on the exercise of comparing two things, and allow the user to compare any two things based on their input. Also, I wanted to persist the trained learners between sessions, so the program saves them in a SQLite database. Besides creating a new learner, at any time you can test the saved learners with (presumably) new photos.
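In case it helps anyone, the persistence part boils down to something like this (a simplified sketch, not my exact code; the table and helper names are illustrative):

```python
import sqlite3
import tempfile
from pathlib import Path

from fastai.vision.all import load_learner

conn = sqlite3.connect("learners.db")
conn.execute("CREATE TABLE IF NOT EXISTS learners (name TEXT PRIMARY KEY, blob BLOB)")

def save_learner(name, learn):
    # Export to a temp file, then store the raw pickle bytes as a BLOB.
    with tempfile.TemporaryDirectory() as d:
        fname = Path(d) / "export.pkl"
        learn.export(fname)  # an absolute path overrides learn.path
        conn.execute("INSERT OR REPLACE INTO learners VALUES (?, ?)",
                     (name, fname.read_bytes()))
        conn.commit()

def get_learner(name):
    # Write the BLOB back to disk and load it as a regular exported learner.
    (blob,) = conn.execute("SELECT blob FROM learners WHERE name = ?",
                           (name,)).fetchone()
    with tempfile.TemporaryDirectory() as d:
        fname = Path(d) / "export.pkl"
        fname.write_bytes(blob)
        return load_learner(fname)  # ready to .predict() on new photos
```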

Completed lesson 1 and wrote a small blog post with some examples. Thank you for this wonderful course and onwards to lesson 2. :rocket:


Hello,

I really like Jeremy’s playlists. I recently did a little prompt-engineering course, and I thought it could be useful to have video transcription texts as prompts and to organize them in collections. For this I wrote a little code snippet that you can run in a Colab notebook (all you need is a Google account; if you have an OpenAI account you can use extra features).

It manages collections of playlists (examples with some of Jeremy’s playlists are included) and their videos, and takes (bilingual) transcriptions to:

  • reorganize (clean) the timestamp structure
  • use them as prompt input

There are some other nice features too.
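The transcription part is less code than it sounds; the core is something like this (a minimal sketch assuming the youtube_transcript_api package and its classic get_transcript-style API; the helper name is illustrative):

```python
from youtube_transcript_api import YouTubeTranscriptApi

def transcript_as_prompt(video_id, languages=("en", "de")):
    # Each segment is {'text': ..., 'start': ..., 'duration': ...};
    # drop the timestamps and join the text into one prompt-ready block.
    segments = YouTubeTranscriptApi.get_transcript(video_id, languages=list(languages))
    return " ".join(seg["text"].replace("\n", " ") for seg in segments)
```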

Here is the link to the github repo.

enjoy

Hi All,

Great course. I am already finding great value in the tools and the lessons. I just finished Lesson 1.

I built an image categorizer for identifying risks in the aftermath of a hurricane in tropical climates:

I am part of a non-profit that developed an app in the aftermath of Hurricane Maria to crowdsource outages.

One of the challenges we had in PR after Hurricane Maria was quickly identifying the type of damage caused by the hurricane, specifically damage done to the power lines. Trees blocking the streets were also a challenge, since they delayed recovery efforts in the parts of the island that needed energy back most urgently. I changed the model Jeremy shared to identify four categories (a training sketch follows the list):

  • Power line blocking a street
  • Tree blocking a street
  • Power line sparking
  • Power line and tree blocking a street
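Concretely, the change from the lesson model is mostly the labelling; a simplified sketch (the path, image size, and epoch count are illustrative, not my exact settings):

```python
from fastai.vision.all import *

# One folder per category under hurricane_damage/ (illustrative path),
# e.g. hurricane_damage/power line sparking/, etc.
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,  # the folder name becomes the label
    item_tfms=[Resize(192, method="squish")],
).dataloaders(Path("hurricane_damage"), bs=32)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
```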

The model needs a little refinement and I plan on doing that for Lesson 2, but it shows promising results.

I am thinking a model like this can be used to categorize and geolocate user-submitted photos of what is causing an outage, independent of the atmospheric event. This could help save lives and expedite the community-led recovery effort.

Best,
Héctor

A few weeks ago I shared my notebook implementing some of the paper: [2305.08891] Common Diffusion Noise Schedules and Sample Steps are Flawed

Since then, I found an issue on the Hugging Face diffusers repository requesting these changes be implemented in the library, so I made a pull request (Fix schedulers zero SNR and rescale classifier free guidance by Max-We · Pull Request #3664 · huggingface/diffusers · GitHub). Now it’s been merged, and I’m a contributor to the diffusers library!
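For anyone curious, the heart of the change is the paper’s rescaling of the noise schedule so the final timestep has exactly zero SNR; condensed, the idea looks like this (a sketch; see the merged PR for the full implementation):

```python
import torch

def rescale_zero_terminal_snr(betas: torch.Tensor) -> torch.Tensor:
    alphas = 1.0 - betas
    alphas_bar_sqrt = torch.cumprod(alphas, dim=0).sqrt()

    # Shift and scale sqrt(alpha_bar) so it is 0 at the last timestep
    # while the first timestep keeps its original value.
    a_first = alphas_bar_sqrt[0].clone()
    a_last = alphas_bar_sqrt[-1].clone()
    alphas_bar_sqrt = (alphas_bar_sqrt - a_last) * a_first / (a_first - a_last)

    # Convert back from cumulative products to per-step betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```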


Suppose you have a video of animals in the savannah, or sprinters in a race, and you want to group the faces that appear in it. I used traditional ML, k-means clustering, to cluster faces extracted from any video of your choice.

Face Clustering: K-Means Clustering
Extract Faces: Extract faces from videos

The two notebooks above are Python scripts presented as notebooks; I needed a way to share the full scripts with you.

Your directory structure should look like this:

project/
├── extract_faces.py
├── cluster_faces.py
└── videos/

Add your desired videos under videos/.
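The clustering step itself is short; the gist of cluster_faces.py is roughly the following (a sketch assuming the face_recognition and scikit-learn packages and a faces/ folder of crops; the real scripts do more):

```python
from pathlib import Path

import face_recognition
import numpy as np
from sklearn.cluster import KMeans

encodings, paths = [], []
for p in sorted(Path("faces").glob("*.jpg")):  # crops from extract_faces.py
    img = face_recognition.load_image_file(p)
    encs = face_recognition.face_encodings(img)  # one 128-d embedding per face
    if encs:
        encodings.append(encs[0])
        paths.append(p)

# Group the embeddings into k clusters (k = expected number of people).
labels = KMeans(n_clusters=5, random_state=0).fit_predict(np.array(encodings))
for p, label in zip(paths, labels):
    print(label, p.name)
```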

Any and every feedback is welcome.


After the first lesson, I browsed through datasets and found something pretty similar to the bird-or-not use case.

I suppose it’s easy, but I can totally see how it could be useful :slight_smile:

Here is my clock classifier notebook! It’s about two kinds of clocks: digital and analog.
Google Colab: MIQqew16OCmFsETDdZZC_IIz3d?usp=sharing

Hello :wave:

I found lesson 2 of the course amazing, so I deployed a waste sorter that tells you which trash bin you should throw each item into, following the recommendations of the city of Strasbourg, France => link to the notebook

link to the app => https://waste-sorter-ee623vdf6q-ew.a.run.app/

The process of training the model first and using it to clean the data was actually quite fun; it’s very stimulating to see the confusion matrix get better after each iteration as you tweak the dataset to be representative of the domain.
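For anyone following along, this is the lesson 2 loop: train, inspect the confusion matrix and top losses, fix the data with the cleaner widget, retrain (a sketch; learn is the trained learner):

```python
from fastai.vision.all import ClassificationInterpretation
from fastai.vision.widgets import ImageClassifierCleaner

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
interp.plot_top_losses(9)

cleaner = ImageClassifierCleaner(learn)
cleaner  # in a notebook: relabel or delete the worst images, then retrain
```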

I found out during this phase that I needed to add more pictures in certain categories (especially between blue and green) to help the model differentiate better. I don’t actually know whether it’s bad to have a category more heavily represented in the dataset when the confusion is higher on that category…

I then exported the model, just as lesson 2 describes, and tried to deploy it in a serverless container on GCP Cloud Run. Dockerizing the stack was the hard part; I finally resorted to using a fastai image in which I installed/reinstalled some dependencies so that they matched the ones used by the learner on my laptop.
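The serving side is tiny compared to the packaging headache; stripped down, it looks something like this (a sketch with assumed names, using FastAPI purely for illustration rather than my exact stack):

```python
from fastai.vision.all import PILImage, load_learner
from fastapi import FastAPI, UploadFile

app = FastAPI()
learn = load_learner("export.pkl")  # the learner exported after lesson 2

@app.post("/predict")
async def predict(file: UploadFile):
    img = PILImage.create(await file.read())
    pred, _, probs = learn.predict(img)
    return {"bin": str(pred), "confidence": float(probs.max())}
```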

The solution is functional, but response times are quite slow; I guess that because the image is so huge (more than 2 GB), cold starts of the serverless container slow things down. Anyway, it was a fascinating experience! I opened a few issues to enhance this application =>

Feel free to contribute and give feedback!


Thank you Jeremy Howard for the deep learning course. I have been Kaggling on my own and really enjoyed your first lesson. I used my own portrait dataset to vary the task. Kaggle (the free tier) was extremely slow and inconsistent.

street.jpg ==> This is a: street.
Probability it’s a portrait: 0.0001
portrait.png ==> This is a: portrait.
Probability it’s a portrait: 1.0000
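The check that prints this is the lesson 1 pattern (a sketch; it assumes learn is the trained learner and the file names above, with the class index looked up from the vocab rather than hard-coded):

```python
from fastai.vision.all import PILImage

i = learn.dls.vocab.o2i["portrait"]  # index of the 'portrait' class
for fname in ("street.jpg", "portrait.png"):
    pred, _, probs = learn.predict(PILImage.create(fname))
    print(f"{fname} ==> This is a: {pred}.")
    print(f"Probability it's a portrait: {probs[i]:.4f}")
```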


Thank you Jeremy. After lesson 2 I created my first Hugging Face model, which detects whether a property image is a virtually staged image or a real photo.


Thanks for the course! I followed the lessons to create an insect classifier and posted it to Hugging Face: 130,133 images across 2,000 species using ResNet18. There aren’t enough images for many of the classes (as few as 25), but it does a pretty good job where more images were available. It’s also pretty good at detecting classes that had fingers in the training images :slight_smile: