Share your V2 projects here

I made an app to classify images of sea fish. For the project I trained a ResNet-34 on a database of 44,134 images classified into 44 fish categories (for the moment; as I manage to grow the database I will add more fish species).
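
For anyone curious what that pipeline looks like, here is a minimal fastai v2 training sketch; the folder layout, image size, and epoch count are my assumptions, not the author's actual code:

```python
from fastai.vision.all import *

# Assumed layout: one folder per species under fish_images/ (44 folders).
path = Path('fish_images')
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=accuracy)  # ResNet-34 backbone
learn.fine_tune(4)
learn.export('fish_classifier.pkl')  # exported model for the app
```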

The project was deployed with mybinder.org. The difference from the app shown in session 3 is that instead of using a Jupyter notebook and Voilà, I used Streamlit. The link to the project can be found here and the GitHub repo can be found here.
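
The nice thing about Streamlit is how little glue code it needs. A minimal sketch of such a front end for an exported fastai learner (the file names are assumptions on my part):

```python
# streamlit_app.py
import streamlit as st
from fastai.vision.all import load_learner, PILImage

learn = load_learner('fish_classifier.pkl')  # assumed export name

st.title('Sea fish classifier')
uploaded = st.file_uploader('Upload a fish image', type=['jpg', 'jpeg', 'png'])
if uploaded is not None:
    img = PILImage.create(uploaded.read())   # bytes -> PILImage
    st.image(img.to_thumb(256))
    pred, _, probs = learn.predict(img)
    st.write(f'Prediction: {pred} ({probs.max().item():.1%})')
```

Run it with `streamlit run streamlit_app.py`.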

2 Likes

The aim of the project is to enhance under-exposed images. Hope you guys like it!!


Example Images

14 Likes


I just published my project on Towards Data Science. I used fastai v2 and experimented with the Chest X-ray dataset on Kaggle, and I got 100 percent accuracy on the provided validation set (the validation set for this dataset is rather small, but it’s what others used on Kaggle), which is the best result on this dataset compared to other Kaggle notebooks.
Click here to see the post.
I’d be really happy to hear your comments about it :slight_smile: :slight_smile:

12 Likes

A new library:

I used this library to segment brain tumours:

6 Likes

Wow, it took a lot longer to deploy this than to train it.
First, I tried to deploy on Binder, but it doesn’t seem to work. I checked some of the deployments that should work, but none of them do anymore. It seems broken.

However, behold the “What game are you playing” classifier:
a little classifier that can detect what game you’re playing from a screenshot.

:dizzy_face: On Heroku the slug size is 935 MB with the requirements from the course page and just the bear classifier example with a 46 MB .pkl file.

Finally: What game are you playing (on SeeMe.AI)

1 Like

Hi ringoo, hope you’re having a fun day!
Well done for persevering! Like you, I think the most difficult bit about creating any classifier model is deploying it online easily at little or no cost, especially if one is still waiting to make their millions :money_with_wings: :moneybag: from it (pity the link makes you log in to see the model :question:).

Cheers mrfabulous1 :smiley: :smiley:

Not if you’re using CPU torch wheels. Were you using the instructions here?

2 Likes

@joedockrill Thank you for your comment. Yes, I used the instructions. The PyTorch version in the Procfile is definitely the CPU version; I just copied it from the instructions.
It seems that I had pointed Heroku at the wrong repository and it used the myBinder requirements. Now that I have repeated all the steps, it seems to work.
I have created a separate repository for the Heroku deployment: https://github.com/mesw/whatgame3.git, if you want to have a look at the files. Now the slug is only 368 MB. Nice!
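
For anyone hitting the same slug-size wall: the usual fix is pinning CPU-only torch wheels in requirements.txt, so Heroku never installs the CUDA build. A sketch of what that can look like (the version pins are illustrative; match them to the versions you trained with):

```
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
torchvision==0.7.0+cpu
fastai>=2.0.0
voila
ipywidgets
```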

If you would like to take this little experiment for a spin, you can find it here:


Too bad, there is a Voilà error (the same as with myBinder) and it does not work. :unamused:

Guess the Doodle App
I’ve created this app based on the knowledge from Lessons 1 & 2, completely in Jupyter notebooks, giving the app a Material UI touch.
Inspired by Google Draw, I built one of my own using fastai v2.
It currently supports 5 doodles (bird, cup, dog, face, fish).
Please do give it a try.

8 Likes

Hi hitchhicker30, hope all is well!
A great little app with an original front end, very different from the most common starter-code classifier apps.
Good work.
Cheers mrfabulous1 :smiley: :smiley:

1 Like

Thank you!! @mrfabulous1

Hi everyone, I released my article on end-to-end image classification last week.


I have tried to cover the entire spectrum, from gathering data and cleaning it, to training a model and creating an app, using the example of a guitar classifier that discriminates between acoustic, classical, and electric guitars.
Let me know what you think about it :slight_smile:
4 Likes

Hi,

I have written a blog post and developed a COVID-19 chest X-ray image classifier based on the first 3 lessons. Here are the links:



https://www.kaggle.com/krrai77/fast-ai-lesson-3-covid19-x-ray-classifier-app

Thanks
Raji

3 Likes

I made something that lets you convert a screenshot of a chess diagram into a FEN (Forsyth–Edwards Notation) string that can be imported into chess apps like lichess. I don’t know if there are any other chess fans in here, but I have a few text message chains with friends where we share puzzles as screenshots, and this makes it easier to save/analyze them.

7 Likes

Hey @GiantSquid, can you share the code/dataset for this?
:smiley: chess fan here

1 Like

Definitely! I will have to find some time to organize my files, but I’ll share something on GitHub soon.

2 Likes

Hi GiantSquid, hope all is well!

Great app!
I played with your app for a bit.
[screenshot of a chess diagram]
This was one of the images I tried.
I was wondering the following things:

  1. Does your app work by classification, segmentation, or something else?
  2. How large was your dataset?
  3. Does your app prefer particular colors of images?

Great to see another interesting use of fastai!

Cheers mrfabulous1 :smiley: :smiley:

Hey, glad you like it! For your image, the model expects it to be from white’s perspective (white pieces at the bottom); I should have mentioned that. I should really add a user input for black/white perspective, but this would require collecting some data from black’s perspective.

By the way, this might explain something odd about your diagram: the position is clearly from black’s perspective but the notation on the edge of the board is as though it were white’s. Did you recognize the image and then load the FEN back into chess.com? If so, I guess the model worked, but you’re going to have some unexpected behavior… your pawns will be moving backwards :slight_smile:

In answer to your questions…

  1. Segmenting, then classifying. I segment in the simplest, most obvious way possible: divide the vertical and horizontal dimensions by 8 and slice into 64 squares. This is why the image needs to be cropped exactly to the board: otherwise the squares will be off. Of course, it’d be better to be able to find the board in a larger image, but I haven’t implemented that. Then I run a standard image classifier on the sliced square images (a sketch of this step appears after this list).

  2. I believe I labeled 20 boards from each of lichess, chess.com, and ChessBase (1 board -> 64 squares). I also set up a pipeline to generate synthetic data using python-chess, which looks to use the same piece set as lichess. Can’t remember if I used this in the latest model… Stay tuned for a GitHub repo/blog post :slight_smile:

  3. My training set was color images of whatever the default board style is for lichess, chess.com, and ChessBase, so it should work best with those.
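
Not GiantSquid’s actual code, but a minimal sketch of the slice-and-classify step described in answer 1; the learner file name, class labels, and piece map are hypothetical:

```python
import numpy as np
from PIL import Image
from fastai.vision.all import load_learner

learn = load_learner('square_classifier.pkl')  # hypothetical export

# Hypothetical mapping from classifier labels to FEN piece letters.
PIECE_MAP = {'wp': 'P', 'wn': 'N', 'wb': 'B', 'wr': 'R', 'wq': 'Q', 'wk': 'K',
             'bp': 'p', 'bn': 'n', 'bb': 'b', 'br': 'r', 'bq': 'q', 'bk': 'k'}

def board_to_fen(img_path):
    """Slice a tightly cropped board into 64 tiles (top-left tile first,
    i.e. a8 from white's perspective), classify each tile, and fold the
    8x8 grid of labels into a FEN piece-placement string."""
    board = Image.open(img_path).convert('RGB')
    w, h = board.size
    sw, sh = w // 8, h // 8
    rows = []
    for r in range(8):
        fen_row, empties = '', 0
        for c in range(8):
            tile = board.crop((c * sw, r * sh, (c + 1) * sw, (r + 1) * sh))
            label = str(learn.predict(np.asarray(tile))[0])  # e.g. 'wp' or 'empty'
            if label == 'empty':
                empties += 1
            else:
                if empties:
                    fen_row += str(empties)
                    empties = 0
                fen_row += PIECE_MAP[label]
        if empties:
            fen_row += str(empties)
        rows.append(fen_row)
    return '/'.join(rows) + ' w - - 0 1'  # assume white to move
```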

2 Likes

Hi guys,
I’m posting my first paper as first author :slight_smile:
We used ULMFiT to train a model to generate molecules similar to the ones tested against SARS-CoV-1, and then fine-tuned a classifier to classify molecules against the main protease (Mpro) of SARS-CoV-2.
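
For readers who haven’t seen the ULMFiT recipe applied outside NLP, the overall shape in fastai v2 is roughly the following; the dataframes, column names, and hyperparameters are illustrative, not the paper’s code, and a real SMILES model would also swap in a character-level tokenizer instead of the default word tokenizer:

```python
from fastai.text.all import *

# 1) Train a language model on a large corpus of SMILES strings
#    (from scratch: English pretraining is of no use here).
dls_lm = TextDataLoaders.from_df(df_smiles, text_col='smiles', is_lm=True)
learn_lm = language_model_learner(dls_lm, AWD_LSTM, pretrained=False)
learn_lm.fit_one_cycle(10)
learn_lm.save_encoder('mol_encoder')

# 2) Fine-tune a classifier on the small, imbalanced labeled set,
#    reusing the language model's encoder and vocabulary.
dls_clf = TextDataLoaders.from_df(df_labeled, text_col='smiles',
                                  label_col='active', text_vocab=dls_lm.vocab)
learn_clf = text_classifier_learner(dls_clf, AWD_LSTM, pretrained=False)
learn_clf.load_encoder('mol_encoder')
learn_clf.fine_tune(5)
```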

Some main findings:

  1. ULMFiT can generate high proportions of valid, unique and novel molecules. This is in sharp contrast with other generative models described in the literature. While some authors describe very low validity for LSTM/GRU/RNN models, others describe good results that were even better after using data augmentation. In our study, we showed that selecting the right sampling temperature can help users generate more than 99% valid molecules, even without data augmentation, that are also unique and novel compared to the training set (see the sketch after this list).

  2. We showed that ULMFiT can approximate the chemical space of the training set. When comparing the distributions of physicochemical descriptors, the generated molecules were very similar to the training set. In addition, we noticed both minor modifications (e.g., changing one atom for another) and major ones (e.g., removal or addition of whole parts of a molecule). This suggests that ULMFiT can be used to generate new chemical matter. The potential of this for drug discovery is still not fully understood, but it could be interesting for IP analysis.

  3. Our classifier did a pretty decent job and outperformed Chemprop, a message passing neural network that was trained on the same dataset. The training set was highly imbalanced, with only 265 active molecules in an ocean of 290K molecules. The training set was the bottleneck here, but I’m sure we can train better models as soon as new, high-quality data is made available for Mpro inhibitors.

  4. We generated a library of molecules and classified it using our classifier. The predicted actives were filtered and submitted to molecular docking, in order to predict their binding mode on Mpro. Surprise, surprise! The top-20 predicted actives were very similar to known Mpro inhibitors. In addition, the binding mode was also similar to the experimental interactions between known inhibitors and Mpro described in protein-ligand crystals.
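
The sampling-temperature point in finding 1 comes down to scaling the logits before the softmax; a generic sketch (not the paper’s code):

```python
import torch
import torch.nn.functional as F

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8) -> int:
    """Sample the next SMILES token from a language model's logits.
    T < 1 sharpens the distribution (more conservative, more valid
    molecules); T > 1 flattens it (more diverse, more invalid strings)."""
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```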

Here’s a figure. Mpro amino acids are shown as beige sticks and ligands as green sticks. A) Known inhibitor and B) generated molecule.

You can check the preprint PDF here: https://assets.researchsquare.com/files/rs-90793/v1_stamped.pdf

We are also expanding this approach to other protein targets and prediction settings. In addition, we will test the predicted actives against Mpro to check whether our model can be useful to guide drug discovery for SARS-CoV-2.

Please let me know if you have any suggestions!

30 Likes

Congrats on the great results! Thanks so much for sharing here :slight_smile:

2 Likes