Share your work here ✅

Hello everybody,

I want to share some work from the past few months:

Thank you all for the great & encouraging community! Over the last few months I have really learned a lot from you! :smiley:

4 Likes

Hi @jeremy,
I want to deploy image classification on Android using the PyTorch Android API.
It requires a .pt file, but learn.save() only gives me a .pth file.
So how can I convert the .pth file to .pt?

This should be helpful: https://pytorch.org/tutorials/beginner/saving_loading_models.html
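
If it helps, here is a rough sketch of one way to do the conversion: restore the model from the .pth file and trace it with torch.jit.trace to get a TorchScript .pt file. This assumes fastai v1; the paths, architecture and image size below are placeholders for your own setup.

```python
import torch
from fastai.vision import ImageDataBunch, cnn_learner, models, get_transforms

# Recreate the Learner the same way it was trained (placeholder path/arch/size),
# then restore the weights saved with learn.save('stage-1') -> models/stage-1.pth.
data = ImageDataBunch.from_folder('data/my_images', train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34)
learn.load('stage-1')

model = learn.model.eval().cpu()
example = torch.rand(1, 3, 224, 224)      # dummy input matching the training image size
traced = torch.jit.trace(model, example)  # convert the model to TorchScript
traced.save('model.pt')                   # .pt file usable from the PyTorch Android API
```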

The reason for the imbalance is that the dataset is actually 1/3 Normal, 1/3 Bacterial Pneumonia and 1/3 Viral Pneumonia. I recently tried classifying viral vs bacterial, parsing the filenames within the PNEUMONIA folder to get the labels.
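
In case anyone wants to reproduce the labelling step, here is a minimal sketch of such a label function, assuming the PNEUMONIA filenames embed the subtype (e.g. person1_bacteria_1.jpeg vs person1_virus_6.jpeg; check your copy of the dataset, as the exact pattern may differ):

```python
from pathlib import Path

def pneumonia_label(path):
    """Label a chest X-ray as bacterial or viral from its filename."""
    name = Path(path).name.lower()
    return 'bacterial' if 'bacteria' in name else 'viral'

# In fastai v1 this can feed a data block pipeline, e.g.:
# data = (ImageList.from_folder(path/'PNEUMONIA')
#         .split_by_rand_pct(0.2)
#         .label_from_func(pneumonia_label)
#         .transform(get_transforms(), size=224)
#         .databunch())
```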

Unlike NORMAL vs PNEUMONIA, where many have achieved accuracies in the high 90s, I was only able to achieve around 77% accuracy for bacterial vs viral. I would be interested to learn if anyone has been more successful.

My notebooks can be found at: https://github.com/williamsdoug/ChestXRay

Hi,

I used the Fatkun image downloader. It has been mentioned in the forums in another topic. My dataset consisted of 670 images.

Thanks for showing me the importance of having an ‘I don’t know’ class. I feel there are two ways of making the neural network say “I don’t know”:

  1. Create a new class labelled ‘unknown’ and fill it with images of everything that is not related to your problem. However, this is rather impractical.

  2. Extract the features from the neural network and pass them through a machine learning classifier (like a decision tree or SVM). This method, however, requires some knowledge of how the neural network is structured internally (see the sketch after this list).
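
To make option 2 more concrete, here is a minimal sketch (variable names are placeholders), assuming `learn` is a trained fastai Learner whose model splits into a convolutional body and a head:

```python
import torch
from sklearn.svm import SVC

body = learn.model[0].eval().cpu()   # keep only the convolutional body as a feature extractor

def features(x):
    """Global-average-pooled body activations for an image batch of shape (N, 3, H, W)."""
    with torch.no_grad():
        return body(x).mean(dim=[2, 3]).numpy()

# Fit an SVM on features of the known classes (X = features(...), y = class labels),
# then reject low-confidence predictions as "I don't know".
svm = SVC(probability=True).fit(X, y)
proba = svm.predict_proba(features(new_batch))
is_unknown = proba.max(axis=1) < 0.6   # threshold is illustrative; tune it on validation data
```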

1 Like

Hi everyone! After 5-6 months of intensive study I deployed my first deep learning app. It is a personal project I have been working on, and I felt it would be cool to deploy it. The app lets users turn arbitrary pictures into paintings in the style of three old masters: Van Gogh, Monet and Cézanne. You can check out the app at photo2painting.tech

The app is a demonstration of how CycleGAN models work in production and is deployed on a DigitalOcean droplet (free with GitHub as I am a student). The project is still in development, so I am eager for your feedback. Feel free to contact me if you want to collaborate on the project. If you want to check the code, I have open-sourced it here.

Here are some examples:


16 Likes

Hi Attol8, hope you are well!

Your app is marvelous; you have done a great job and created a good application of the course content.
The app shows your 5-6 months of intensive study have been well spent!

Cheers mrfabulous1 :smiley::smiley:

Thanks for the wise words @mrfabulous1 :slight_smile:

Hello @henripal, @daveluo and @lesscomfortable,
I saw that you all worked with satellite images. I am currently mentoring a group of students from the University of Brasília on deep learning. One of the PhD students is working on a DL project with satellite images and would like to use a pre-trained model, but the images have more than 3 channels. How did you handle this problem? Thank you.
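
One workaround I have seen discussed (though I don't know if it is what you did) is to widen the first convolution of a pretrained network so it accepts the extra channels, reusing the RGB weights. A rough sketch with torchvision:

```python
import torch
import torch.nn as nn
from torchvision import models

def resnet34_multichannel(n_channels=4):
    """Pretrained ResNet34 whose first conv accepts n_channels instead of 3."""
    model = models.resnet34(pretrained=True)
    old = model.conv1                                    # Conv2d(3, 64, kernel_size=7, ...)
    new = nn.Conv2d(n_channels, old.out_channels,
                    kernel_size=old.kernel_size, stride=old.stride,
                    padding=old.padding, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight                   # keep the pretrained RGB filters
        new.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)  # initialise the extra channels
    model.conv1 = new
    return model

model = resnet34_multichannel(n_channels=4)   # e.g. RGB + near-infrared
```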


Hi everyone!

I made a web app that understands whether/how an image is rotated and “derotates” it. This idea came to me because when I take pictures with my phone, they don’t come out with a consistent orientation, I think because of the auto-rotate feature (or lack of it?)…

The code (including the training notebook) can be found on GitHub and the web app at derotate.appspot.com. I don’t know if the idea is useful in itself, but I didn’t find a lot of web apps that output images (rather than a class like in fastai’s tutorial), so maybe it can be useful in that sense.

Eventually, I would like to make a web service out of it and call this function during an image processing pipeline, but I’m still a bit stuck on this step. Does anyone have a recommendation on how to do it?
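
For context, what I have in mind is something like a small HTTP endpoint around the model. Here is a rough sketch with Flask (just one possible framework; `derotate` is a placeholder for my existing inference function):

```python
import io
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.route('/derotate', methods=['POST'])
def derotate_endpoint():
    img = Image.open(request.files['image'].stream)   # image uploaded as multipart form data
    fixed = derotate(img)                             # placeholder for the model + rotation fix
    buf = io.BytesIO()
    fixed.save(buf, format='PNG')
    buf.seek(0)
    return send_file(buf, mimetype='image/png')

# Another step of a processing pipeline could then call it with, e.g.:
#   curl -F "image=@photo.jpg" http://localhost:5000/derotate -o fixed.png
```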

10 Likes

@sebderhy
This could be a super helpful plugin for gallery apps on our smartphones, Google Photos, etc.
Great work!

1 Like

Hi sebderhy, thanks for an immensely useful app.
Seeing yours and all the great apps and things people create in this thread, and all the great work done by the fastai team and the community, is truly inspirational.

Well done.

Mrfabulous1 :smiley::smiley:

1 Like

12-class sentiment classification of US Airline Tweets with standard ULMFiT - ~60% accuracy

Hello everyone! I’m really interested in deep learning for NLP, so I’ve been using it to train language models for downstream tasks (document similarity, sentiment classification, etc.).

I had a go at this Kaggle dataset, and after relatively little training I got around 60% accuracy on 12 classes (positive, neutral, and 10 negative classes).

I’ve been playing with momentum and learning rates, but I never seem to be able to get much further. Does anyone have pointers on how I could substantially improve this result?
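
In case it helps anyone comparing notes, a typical ULMFiT fine-tuning schedule (gradual unfreezing with discriminative learning rates, fastai v1) looks roughly like this, assuming `learn` is a text_classifier_learner with the fine-tuned encoder already loaded:

```python
# Learning rates and epoch counts below are the usual course-style defaults, not tuned values.
learn.freeze()
learn.fit_one_cycle(1, 2e-2, moms=(0.8, 0.7))

learn.freeze_to(-2)                                   # unfreeze the last two layer groups
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2), moms=(0.8, 0.7))

learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3), moms=(0.8, 0.7))

learn.unfreeze()                                      # finally fine-tune the whole model
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3), moms=(0.8, 0.7))
```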

Kaggle Notebook

Thanks a lot :slight_smile:

1 Like

Bro, sounds like a cool project. Did you finish it already?

I recently gave a meetup talk on text classification using fastai.text.
You can find my slides and Jupyter notebook below.

2 Likes

Hey,

Here’s another experiment on video enhancement / super-resolution I’ve been working on recently (and really enjoyed doing!).

The idea is that since a video is itself a small dataset, if we start with a good image enhancement model (for example the fastai lesson 7 model) and fine-tune it on the video’s own frames, the model can hopefully learn specific details of the scene when the camera gets closer and then reintegrate them when the camera moves further away (does this make sense?).
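
Concretely, the first step is just turning the video into a folder of frames to fine-tune on. A minimal sketch with OpenCV (filenames are placeholders):

```python
from pathlib import Path
import cv2

Path('frames').mkdir(exist_ok=True)
cap = cv2.VideoCapture('input_video.mp4')   # placeholder filename
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break                               # end of the video
    cv2.imwrite(f'frames/frame_{frame_idx:05d}.jpg', frame)
    frame_idx += 1
cap.release()
```

These frames then become the small dataset the enhancement model is fine-tuned on.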

Here is a screenshot of the first results I got (more result screenshots and videos can be found in my GitHub repository):

In my experiments, the algorithm achieved better image quality than the lesson 7 pets model, which seems logical since it is fine-tuned for each specific video.

I actually initially posted this work in the Deep Learning section, because I feel it’s not finished yet and I’m looking for help on how to move forward. I haven’t found much work on transfer learning for video enhancement so far (did I miss something?), although it looks like an interesting research direction to me. Do you think this kind of transfer learning for video enhancement has potential? If so, what would you do to improve on this work?

Thanks!
Sebastien

3 Likes

Hello, team!

I recently wrote a Medium article that I wish had been available when I started this journey. I feel some of the questions it addresses come up fairly frequently (and have even been addressed in this course).

Hoping this might be helpful to somebody and eager to continue to give back to the community that has given us this resource!

:slight_smile:

-Matthew

6 Likes

Hello Fastaiians

In my recent Medium article, I wrote about a project in which I created a CNN-based model to predict the exact age of a person from their image.

This is the link:

There are many new things I learnt while working on this project:

  1. Reconstructing the architecture of the ResNet34 model to handle image regression tasks (see the sketch after this list)
  2. The discriminative learning rate technique
  3. Image resizing techniques
  4. The powerful image augmentation techniques of the fastai v1 library
  5. How to use Cadene's pretrained PyTorch models
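
To make point 1 concrete, here is a hedged fastai v1 sketch of image regression: labelling with floats so cnn_learner builds a single-output head. Paths and column names are placeholders, not the exact ones from the article.

```python
from fastai.vision import (ImageList, FloatList, cnn_learner, models,
                           get_transforms, MSELossFlat)

# Placeholder layout: a CSV with image filenames plus an 'age' column.
data = (ImageList.from_csv('data/faces', 'labels.csv', folder='images')
        .split_by_rand_pct(0.2)
        .label_from_df(cols='age', label_cls=FloatList)   # float labels -> regression
        .transform(get_transforms(), size=224)
        .databunch())

learn = cnn_learner(data, models.resnet34, loss_func=MSELossFlat())
learn.fit_one_cycle(5)
```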

Here are a few results of the prediction:

As a test image to validate the prediction accuracy of my model, I used a picture of India’s PM Modi taken in 2015 (when he was 64 years old) and checked the result:

and here is the result from the model:


Hope this can be useful to anyone who wishes to work on a similar model.

Cheers!!
Abhik

11 Likes

Racket Classifier
Created my first GitHub repository: a classifier identifying tennis, badminton and table tennis rackets. I was surprised to get to 95% accuracy. The confusion matrix also makes sense, since a few badminton and tennis rackets look similar from certain angles/crops.


PS: the GitHub repo also has the cleaned URL files if someone wants to replicate it.
Since this is my first GitHub repository, I'm looking for experts to point out issues/mistakes and suggest improvements!
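
For anyone who wants to try it, a rough sketch of how the URL files could be used to rebuild the dataset and train the classifier (fastai v1; file and folder names are placeholders for the ones in the repo):

```python
from pathlib import Path
from fastai.vision import (download_images, verify_images, ImageDataBunch,
                           get_transforms, cnn_learner, models, error_rate,
                           imagenet_stats)

path = Path('data/rackets')
for sport in ['tennis', 'badminton', 'table_tennis']:
    dest = path/sport
    dest.mkdir(parents=True, exist_ok=True)
    download_images(path/f'urls_{sport}.csv', dest, max_pics=200)   # one URL file per class
    verify_images(dest, delete=True, max_size=500)                  # drop broken downloads

data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
```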

1 Like