Share your V2 projects here

A new library:

I used this library to segment brain tumours:

6 Likes

Wow. It took a lot longer to deploy this than to train it.
First, I tried to deploy on Binder, but it does not seem to work. I checked some of the examples that should work, but none of them do anymore. It seems broken.

However, behold the “What game are you playing” classifier:
a little classifier that can detect what game you are playing from a screenshot.

:dizzy_face: On Heroku the slug size is 935MB with the requirements from the course page and only the bear classifier example with a 46MB .pkl file.

Finally: What game are you playing (on SeeMe.AI)

1 Like

Hi ringoo, hope you're having a fun day!
Well done for persevering. Like you, I think the most difficult part of creating any classifier model is deploying it online easily at little or no cost, especially if one is still waiting to make their millions :money_with_wings: :moneybag: from it (pity the link makes you log in to see the model :question:).

Cheers mrfabulous1 :smiley: :smiley:

Not if you’re using CPU torch wheels. Were you using the instructions here?

2 Likes

@joedockrill Thank you for your comment. Yes, I used the instructions. The PyTorch version in the Procfile is definitely the CPU version; I just copied it from the instructions.
It seems that I pointed Heroku at the wrong repository, so it used the myBinder requirements. Now that I have repeated all the steps, it works.
I have created a separate repository for the Heroku deployment: https://github.com/mesw/whatgame3.git, if you want to have a look at the files. Now the slug is only 368MB. Nice!
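For anyone else fighting the slug size: the fix is making sure Heroku installs CPU-only wheels. A minimal sketch of what the requirements can look like (the version numbers here are just an example, not the exact ones from my repo; check the course instructions for the current ones):

```
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
torchvision==0.7.0+cpu
fastai
voila
ipywidgets
```

The `+cpu` wheels skip the CUDA libraries, which is where most of the 935MB was going.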

If you would like to take this little experiment for a spin, you can find it here:


Too bad, there is a Voila error (the same as with myBinder) and it does not work. :unamused:

Guess the Doodle App
I’ve created this app based on the knowledge from Lessons 1 & 2, completely in Jupyter Notebooks, giving the app a Material UI touch.
Got inspired by Google Draw and built one of my own using fastai v2.
Currently it supports 5 doodles (bird, cup, dog, face, fish).
Please do give it a try.

8 Likes

Hi hitchhicker30, hope all is well!
A great little app, with an original front end very different from the most common starter-code classifier apps.
Good work.
Cheers mrfabulous1 :smiley: :smiley:

1 Like

Thank you!! @mrfabulous1

Hi everyone, I released my article on end-to-end image classification last week.


I have tried to cover the entire spectrum, right from gathering data and cleaning it to training a model and creating an app, using the example of a guitar classifier that discriminates between acoustic, classical, and electric guitars.
Let me know what you think about it :slight_smile:
4 Likes

Hi,

I have written a blog post and developed a COVID-19 chest X-ray image classifier based on the first 3 lessons. Here are the links:



https://www.kaggle.com/krrai77/fast-ai-lesson-3-covid19-x-ray-classifier-app

Thanks
Raji

3 Likes

I made something that lets you convert a screenshot of a chess diagram into a FEN (a standard text notation for chess positions) that can be imported into chess apps like lichess. I don’t know if there are any other chess fans in here, but I have a few text message chains with friends where we share puzzles as screenshots, and this makes it easier to save and analyze them.

7 Likes

Hey @GiantSquid, can you share the code/dataset for this?
:smiley: chess fan here

1 Like

Definitely! I will have to find some time to organize my files, but I’ll share something on GitHub soon.

2 Likes

Hi GiantSquid, hope all is well!

Great app!
I played with your app for a bit.
[image]
This was one of the images I tried.
I was wondering the following things:

  1. Does your app work by classification, segmentation, or something else?
  2. How large was your dataset?
  3. Does your app prefer particular colors of images?

Great to see another interesting use of fastai!

Cheers mrfabulous1 :smiley: :smiley:

Hey, glad you like it! For your image, the model expects it to be from white’s perspective (white pieces at the bottom); I should have mentioned that. I should really add a user input for black/white perspective, but this would require collecting some data from black’s perspective.

By the way, this might explain something odd about your diagram: the position is clearly from black’s perspective but the notation on the edge of the board is as though it were white’s. Did you recognize the image and then load the FEN back into chess.com? If so, I guess the model worked, but you’re going to have some unexpected behavior… your pawns will be moving backwards :slight_smile:

In answer to your questions…

  1. Segmenting then classifying. I segment in the simplest, most obvious way possible: divide the vertical and horizontal dimensions by 8 and slice into 64 squares (see the sketch after this list). This is why the image needs to be cropped exactly to the board: otherwise the squares will be off. Of course, it’d be better to be able to find the board in a larger image, but I haven’t implemented that. Then I run a standard image classifier on the sliced square images.

  2. I believe I labeled 20 boards from each of lichess, chess.com, and chessbase (1 board -> 64 squares). I also set up a pipeline to generate synthetic data using python-chess, which appears to use the same piece set as lichess. Can’t remember if I used this in the latest model… Stay tuned for the GitHub repo/blog post :slight_smile:

  3. My training set was color images of whatever the default is for lichess, chess.com, and chessbase, so it should work best with those.
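For anyone who wants to try the slicing trick from (1), it really is as simple as it sounds. This is just a quick sketch of the idea, not the actual code from my repo (names are made up):

```python
from PIL import Image

def slice_board(path):
    """Split a board screenshot (cropped exactly to the board) into
    64 square tiles, yielded rank by rank from the top-left corner.
    Each tile then goes to a standard image classifier."""
    board = Image.open(path)
    w, h = board.size
    sq_w, sq_h = w // 8, h // 8
    for row in range(8):
        for col in range(8):
            box = (col * sq_w, row * sq_h, (col + 1) * sq_w, (row + 1) * sq_h)
            yield board.crop(box)
```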

2 Likes

Hi guys,
I’m posting my first paper as first author :slight_smile:
We used ULMFit to train a model to generate molecules similar to the ones tested against SARS-CoV-1, and then fine-tuned a classifier to classify molecules against the main protease (Mpro) of SARS-CoV-2.

Some main findings:

  1. ULMFit can generate high proportions of valid, unique and novel molecules. This is in sharp contrast with other generative models described in the literature: while some authors describe very low validity for LSTM/GRU/RNN models, others describe good results that got even better after using data augmentation. In our study, we showed that selecting the right sampling temperature (see the sketch after this list) can help users generate more than 99% valid molecules, even without data augmentation, that are also unique and novel compared to the training set.

  2. We showed that ULMFit can approximate the chemical space of the training set. When comparing the distributions of physicochemical descriptors, the generated molecules were very similar to the training set. In addition, we noticed both minor modifications (e.g., changing one atom for another) and major ones (e.g., removal or addition of whole parts of a molecule). This suggests that ULMFit can be used to generate new chemical matter. The potential of this for drug discovery is still not fully understood, but it could be interesting for IP analysis.

  3. Our classifier did a pretty decent job and outperformed Chemprop, a message passing neural network trained on the same dataset. The training set was highly imbalanced, with only 265 active molecules in an ocean of 290K molecules. The training set was the bottleneck here, but I’m sure we can train better models as soon as new, high-quality data is made available for Mpro inhibitors.

  4. We generated a library of molecules and classified it using our classifier. The predicted actives were filtered and submitted to molecular docking, in order to predict their binding mode on Mpro. Surprise, surprise! The top-20 predicted actives were very similar to known Mpro inhibitors. In addition, the binding mode was also similar to the experimental interactions between known inhibitors and Mpro described in protein-ligand crystals.
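For those curious about the sampling temperature mentioned in (1), the generic idea (this is an illustration, not our actual code) is to divide the language model's logits by a temperature before sampling; a lower temperature sharpens the distribution and trades some diversity for validity:

```python
import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=0.7):
    """Sample the next SMILES token from a language model's output
    logits. Temperatures below 1 sharpen the distribution, which
    raises the fraction of chemically valid molecules generated."""
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```

The value 0.7 is just a placeholder; the right temperature depends on the model and dataset.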

Here’s a figure. Mpro amino acids are shown as beige sticks and ligands as green sticks. A) Known inhibitor and B) generated molecule.

You can check the preprint PDF here: https://assets.researchsquare.com/files/rs-90793/v1_stamped.pdf

We are also expanding this approach to other protein targets and prediction settings. In addition, we will test the predicted actives against Mpro to check if our model can be useful to guide drug discovery for SARS-CoV-2.

Please let me know if you have any suggestions!

30 Likes

Congrats on the great results! Thanks so much for sharing here :slight_smile:

2 Likes

Hi everyone,

Thought I would share my project here. I am working on analyzing Li-ion battery degradation with deep-learning techniques, mostly using LSTM-RNN networks. Li-ion batteries have over 10 different major degradation mechanisms that may be co-dependent, and together they result in battery capacity fade. However, only three factors (voltage, current, and temperature) are controllable in the operation of commercial Li-ion batteries. I plan to study how these different mechanisms interact (shown in the figure below, from a research paper).

I used cycling data from my battery experiments as input and trained an LSTM-RNN network to predict the capacity. As Jeremy pointed out, you shouldn't shuffle data in a time-series problem, so I trained on non-shuffled data: the first 20% of cycles, validating on the remaining 80% (figure shown). Since the test condition is constant, the degradation mechanisms remain more or less the same, which makes this a fairly easy prediction problem.
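In case it's useful to anyone, the split itself is nothing fancier than this (a minimal sketch; the names are illustrative, not from my notebooks):

```python
import numpy as np

def chronological_split(capacity_per_cycle, train_frac=0.2):
    """Time-ordered split for a degradation series: train on the
    first 20% of cycles and validate on the remaining 80%, so the
    model never sees the future during training."""
    data = np.asarray(capacity_per_cycle)
    n_train = int(len(data) * train_frac)
    return data[:n_train], data[n_train:]
```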

However, Li-ion batteries suffer from accelerated degradation when they reach end of life (you may know this from experience, when the batteries in your phone or laptop suddenly start to die fast). My model is unable to capture that end effect yet. I could increase the training cycles beyond 20% to resolve this. But if the model were able to capture the long-term effect accurately from just the first 20% of cycles, it would solve a major problem for the research and battery development community: reducing battery development and testing time. Battery cycling can take months or even years, which makes it a very costly process (I have personally tested some Li-ion battery samples for as long as 10 months on just one test condition).

Several researchers have studied these degradation mechanisms and have put forth physical equations that govern them. However, to use these physical equations one must be able to parameterize the cell, which may involve 50+ variables, and these parameters are often not provided by battery manufacturers. I was hoping that the LSTM-RNN would somehow be able to extract these parameters back from the data (I don't know how, though). The next step, I thought, was to embed the known physical equations in the network by modifying the loss function and defining boundary conditions (punishing the network for predicting physically impossible results, such as battery capacity increasing); a rough sketch of that idea is below.
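To make the loss-function idea concrete, here is a rough sketch of what I have in mind, not a tested implementation (the penalty form and weight are assumptions on my part):

```python
import torch
import torch.nn.functional as F

def physics_guided_loss(pred_capacity, true_capacity, penalty_weight=1.0):
    """MSE plus a monotonicity penalty: capacity should not increase
    from one cycle to the next, so any positive step in the predicted
    capacity sequence is treated as physically impossible and penalized."""
    mse = F.mse_loss(pred_capacity, true_capacity)
    increases = torch.clamp(pred_capacity[1:] - pred_capacity[:-1], min=0)
    return mse + penalty_weight * increases.mean()
```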

Looking forward to connecting with folks who are working with RNN networks (especially LSTMs). If anyone knows of or has tried physics-guided networks and would be willing to help me out (I have lots of doubts), I would appreciate it. Stay safe!

Thanks
Ravin Singh

13 Likes

Here’s the GitHub repo with code, notebooks, data, etc.

5 Likes

Tried line art using a GAN and got some amazing results. After a lot of struggle, I am getting proper lines around the face.

Hope you guys like it.

https://twitter.com/Vijish68859437



15 Likes