Share your work here ✅

Pretty much. And I added a few tricks to make it work on iPhone. It’s live now and growing in usage: https://apps.apple.com/us/app/shot-count-basketball-ai/id1486237449

I’m going to build an Android version soon.

1 Like

I built a simple X-ray classifier that analyzes images and categorizes them as either hand, foot, or chest X-rays.
Dataset
The image dataset was generated from a Google Images search using this handy tool:
https://addons.mozilla.org/en-US/firefox/addon/google-images-downloader/?src=recommended

Deployment
The model is deployed on Render at X-ray Classifier
This was built using this example.

Model
The code for the model and tweaks is based on the work from Lessons 1 and 2, using ResNet34.
(Screenshots: confusion matrix and training results.)
The model had an accuracy of about 96%, but had some difficulty differentiating between hand and foot X-rays, perhaps because the two have a similar skeletal architecture.
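For anyone curious, the training follows the standard Lesson 1/2 recipe. Roughly (fastai v1; the folder name here is a placeholder, the notebook below has the real code):

from fastai.vision import *

# images organized in per-class folders: xrays/hand, xrays/foot, xrays/chest
data = ImageDataBunch.from_folder(Path('xrays'), train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)                            # train the new head
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-4))  # fine-tune the whole network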

The code is available in this Jupyter Notebook on GitHub.

Possible Use Cases
As a radiologist, I know that speeding up image-reporting workflows saves both the patient and the referring physician time. Implementing AI for image triage and sorting is one way that deep learning can help.

I am just kicking off the fastai v3 course (Lessons 1 and 2) and hope to have more projects to showcase with real-life datasets from my research and work in Africa, where I believe AI is one solution to the shortage of access to radiologists and other cadres of health personnel.

Do check out my work; any feedback would be appreciated.

5 Likes

Hi everyone! I’m an astronomer working on understanding how other galaxies grow over cosmic time scales. Mostly I’m curious about the processes that allow or prevent a galaxy from forming new stars out of gas clouds (i.e., generally we find that galaxies with more cold gas tend to form stars at a higher rate), but anything related to galaxy evolution will hold my interest!

Recently I submitted a paper that examines how galaxy gas content, optical-wavelength morphology, and environment (basically the density of neighboring galaxies) affect each other. The objective was to directly predict a galaxy’s gas-to-stellar mass fraction from an RGB image, like the examples shown below:

Deep CNNs (i.e., the xresnet + @Diganta’s excellent Mish activation function) coupled with @LessW2020’s Ranger optimizer were essential for achieving accurate results. Afterwards, I changed the problem into a classification task so that Grad-CAM could be used for visual interpretations (here the credit goes to @quan.tran for an amazing implementation in PyTorch/fastai). Below are some examples where you can see the highlighted portions of an input galaxy image (left) that correspond to low gas content (center) and high gas content (right):


Finally, I spent some effort checking to see how well the learned relationship between a galaxy’s gas content and its visual appearance may generalize to galaxies in different environments. But that goes a bit deeper into the astrophysics at play, and you can read all about it in the paper! 🙂
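For those interested in the fastai side, the setup in v2 terms looks roughly like this (a sketch, not the paper’s exact code; the DataFrame and column names are placeholders):

from fastai.vision.all import *

# hypothetical table mapping galaxy image files to gas-to-stellar mass fractions
dls = ImageDataLoaders.from_df(df, fn_col='image', label_col='gas_frac',
                               y_block=RegressionBlock(), item_tfms=Resize(224))
model = xresnet34(n_out=1, act_cls=Mish)              # xresnet with Mish activations
learn = Learner(dls, model, loss_func=MSELossFlat(), opt_func=ranger)
learn.fit_flat_cos(20, 1e-3)                          # flat-then-cosine schedule pairs well with Ranger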

As you can see from my tags, I couldn’t have done it without the Fastai developers and community. These forums have been an invaluable resource, too. I’m so grateful for all the work you do!

EDIT: added prettier Grad-CAM pictures from the paper.

22 Likes

Hi Sarvesh,

Thank you for sharing this dataset. Could you tell me where you got the images from?

I managed to get an accuracy of 97.2% using resnet101.


Setting up my local workstation for fastai v3 2019 tutorials.

I’m through Lesson 2 OK, and it’s quick enough for my needs. The refurbished workstation cost about $800 all up.

Workstation

HP Z230 Tower Workstation (D1P34AV)
Quad core Intel(R) Xeon(R) CPU E3-1245 v3 @ 3.40GHz
Ubuntu 18.04.3 LTS

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
4GB graphics RAM 

% lspci | grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.48.02    Driver Version: 440.48.02    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0  On |                  N/A |
| 45%   45C    P0    N/A /  75W |    866MiB /  4015MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+
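And a quick sanity check that PyTorch sees the card:

import torch

print(torch.cuda.is_available())       # True
print(torch.cuda.get_device_name(0))   # GeForce GTX 1050 Ti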
                                                                           

Conda Environment

I have a base environment.yml that disallows any leaks into the fastai module from conda defaults. This seems to work well on my machine as long as I manage batch sizes.

The ‘::’ qualifier pins each package to the channel named before it (e.g. anaconda::pip). This is undocumented, but officially supported; I have requested that Anaconda document it.

name: Fastai.V3
channels:
  - nodefaults
  - fastai
  - pytorch
dependencies:
  - anaconda::python=3.7
  - fastai::fastai
  - anaconda::pip
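To create and activate it (assuming the file is saved as environment.yml): conda env create -f environment.yml, then conda activate Fastai.V3.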

Notebook Environment

For running fastai notebooks, I augment this as follows:

name: Fastai.CourseV3
channels:
  - nodefaults
  - fastai
  - pytorch
dependencies:
  - anaconda::python=3.7
  - fastai::fastai
  - anaconda::pylint
  - anaconda::pip
  - conda-forge::black
  - conda-forge::nb_black
  - anaconda::jupyter
  - anaconda::notebook
  - anaconda::nb_conda
  - conda-forge::jupyter_contrib_nbextensions

Other stuff I use

nvtop, for monitoring GPU load, processes and memory. It works on my local machine, and compiles and runs OK on all servers I’ve tried.

autoenv switches environments, including conda, as I move between directories. It manages paths and environment variables as well.

2 Likes

I just finished comparing fastai’s tabular module against a variety of newer baselines; you can read about it here:

1 Like

So I was working on a spam classification project, trying several ML models like Random Forest, SVM, etc., and the best scores I could achieve were with Random Forest:

  • Precision: 96.48%
  • Recall: 95.36%

Then I tried the code from Lesson 4, and the scores jumped to:

  • Precision: 98.28%
  • Recall: 99.61%

and here are a couple of predictions 😄


False = Ham
True = Spam

Full project (ML + fastai) repo
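For reference, the Lesson 4 recipe is roughly this (fastai v1 text API; the CSV and column names here are placeholders, the repo has the actual code):

from fastai.text import *

# Lesson 4 also fine-tunes a language model first; omitted here for brevity.
data = TextClasDataBunch.from_csv('.', 'spam.csv', text_cols='text', label_cols='label')
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)
learn.fit_one_cycle(4, 1e-2)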

3 Likes

I had some fun generating butterflies using a StyleGAN trained on a massive dataset available from the Natural History Museum in London. Of course, it was obligatory to publish the results to thisbutterflydoesnotexist.com. A bit about how I built it is posted here. There is also a video of interpolation between 20 butterflies.

Examples:

11 Likes

@digitalspecialists now that is cool!!! Can you link where/how you built it? 🙂

2 Likes

I’ve been working on investigating self-supervised learning, which Jeremy recently posted about here.

Blog post

Corresponding notebooks

As we’ve learned in the fast.ai course, we always want to start our computer vision models with some understanding of the world instead of with random weights. For almost all vision tasks this means we start with a network that was pretrained on ImageNet. This works very well a lot of the time, but there are occasional problems with this approach. For example, pretrained ImageNet weights don’t seem to work as well on medical images as they do on natural images.

This is probably because of how different the two kinds of images are.

Self-supervised learning is a set of techniques in which we train a network without labels on a pretext task, so that it then trains faster and to a higher accuracy on a downstream task.

The pretext task I looked at is called “inpainting”: it involves removing patches from images and training a neural network (e.g. a U-Net) to fill in the missing patches.

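Here’s a minimal sketch of the patch-removal step (the names and defaults are illustrative; the notebooks above have the real implementation):

import torch

def cut_patches(imgs, n_patches=3, size=32):
    # imgs: (batch, channels, H, W), with H and W larger than `size`.
    # Zero out random square patches and return (corrupted input, original target).
    corrupted = imgs.clone()
    _, _, h, w = imgs.shape
    for img in corrupted:                           # iterate over the batch
        for _ in range(n_patches):
            y = torch.randint(0, h - size, (1,)).item()
            x = torch.randint(0, w - size, (1,)).item()
            img[:, y:y + size, x:x + size] = 0      # in-place edit of `corrupted`
    return corrupted, imgs

# training step for the U-Net: loss = F.mse_loss(unet(corrupted), target)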

After we train the network on this task, we take that same network and train it on a downstream task that’s actually important to us (classification, segmentation, object detection, etc.). For my experiments I trained a network to do classification on the brand-new Image网 dataset released by fast.ai. (More in my blog post about why Image网 is so useful for self-supervised learning research.)
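The transfer itself can be as simple as keeping the trained encoder and attaching a fresh classification head; a sketch of the pattern with fastai v2 helpers (the architecture here is an assumption, not my notebooks’ exact code):

from fastai.vision.all import *

n_classes = 10                                   # illustrative; in practice use dls.c
body = create_body(resnet34, pretrained=False)   # same encoder architecture the U-Net wraps
head = create_head(1024, n_out=n_classes)        # 1024 = 512 encoder features * 2 (concat pooling)
classifier = nn.Sequential(body, head)
# copy the inpainting-pretrained encoder weights into `body` before fine-tuning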

In the end I found that even with this simple pretext task we could get 62.1% accuracy on the classification task, compared to the 58.2% we get when training from random weights.

I’m planning to continue investigating the effects of different pretext tasks on downstream performance. The dream goal would be to find a task or collection of tasks that we could use to pretrain a network that would be competitive with pretrained ImageNet weights.

17 Likes

We implemented a pretty amazing 2019 paper for image similarity / image retrieval using fast.ai. It’s of much lower complexity than other state-of-the-art methods (e.g. no triplet mining required), is as fast to train as regular image classification DNNs, and achieves results on par with or better than the best previously published ones.

Repository: https://github.com/microsoft/computervision-recipes/tree/master/scenarios/similarity
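The gist: train a normal classifier, then use L2-normalized embeddings from the network and rank reference images by cosine similarity to the query. A toy sketch of the ranking step (not the repo’s actual code):

import torch
import torch.nn.functional as F

def rank_by_similarity(query_emb, gallery_embs):
    # Return gallery indices sorted by cosine similarity to the query.
    q = F.normalize(query_emb, dim=0)
    g = F.normalize(gallery_embs, dim=1)
    return (g @ q).argsort(descending=True)

gallery = torch.randn(100, 512)                 # embeddings of 100 reference images
query = torch.randn(512)                        # embedding of the query image
print(rank_by_similarity(query, gallery)[:5])   # indices of the 5 most similar images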

11 Likes

This is great. I love work that shows progress on low-complexity solutions.

1 Like

Great work and super nice repo!

1 Like

Hi everyone,

Based on the week 2 notebook, I made a mushroom classifier… only for the 10 most common mushrooms. It gets about 85% accuracy on test data, and seems to work decently on random mushroom images you can find online! Any feedback is welcome!

what-is-this-mushroom.onrender.com

1 Like

Based on Lesson 2, and it works pretty well (I trained on 1920x1080 images, although that was probably a mistake, since I likely didn’t need images that large):

It even works on the grown-up version of Shio without having a grown-up version in the training set.


GitHub
data_set_that_I_built

Deployed; please feel free to upload your own image: https://github.com/JonathanSum/Deep-Projects/blob/master/Character_idenf_deploy.ipynb

2 Likes

Hi, Hyungue Lim.
It is pretty interesting. Although I wish the accuracy were higher, I will use your project to help identify whether a mushroom is poisonous or not.

Too bad it is not higher yet; I hope you will build more models in the future.
I tried your model, and it works pretty well.

1 Like

Hello! Thanks for trying it out!
Yes, I believe the accuracy could be higher with more data. Getting mushroom pictures was not as easy as I thought: there were not as many pictures of specific mushrooms as I expected to find, and often there were different mushrooms in the same picture.
It would be interesting to see how yours turns out!

1 Like

Great news for the liver transplant business! 😃

Joking aside, please check out my posts on deadly mushroom identification, and remember that many people naively believe that AI can do anything perfectly.

2 Likes

Hi

I just published my work on New York City Taxi Fare Prediction (Kaggle competition).

I used pure PyTorch to build a tabular model.

Take a look, and PM me with any questions!
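The core of it is the usual pattern: embeddings for the categorical columns, concatenated with the continuous features and fed to an MLP. A minimal sketch (the layer sizes and feature choices here are illustrative, not the exact model):

import torch
import torch.nn as nn

class TabularModel(nn.Module):
    def __init__(self, cardinalities, emb_dims, n_cont, hidden=128):
        super().__init__()
        # one embedding table per categorical column
        self.embeds = nn.ModuleList(
            [nn.Embedding(card, dim) for card, dim in zip(cardinalities, emb_dims)]
        )
        self.layers = nn.Sequential(
            nn.Linear(sum(emb_dims) + n_cont, hidden), nn.ReLU(), nn.BatchNorm1d(hidden),
            nn.Linear(hidden, 1),                    # one output: the predicted fare
        )

    def forward(self, x_cat, x_cont):
        embs = [emb(x_cat[:, i]) for i, emb in enumerate(self.embeds)]
        return self.layers(torch.cat(embs + [x_cont], dim=1))

model = TabularModel(cardinalities=[24, 7], emb_dims=[12, 4], n_cont=5)  # e.g. hour, weekday + 5 numeric features
preds = model(torch.randint(0, 7, (32, 2)), torch.randn(32, 5))          # forward pass on a batch of 32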

1 Like