Share your work here ✅

When I said this, I really meant it :slight_smile: Here is a new addition: fluke detection.

Here are a couple of detections:
[images: example fluke detections]

I was quite surprised that doing something so simple (though with the use of a pretrained model) could give such good results. Also, I only used 300 examples for training.

I annotated the images myself (details in the NB) and I found the process really valuable. I learned a lot about the data and it gave me good food for thought on what I would like my model to do. If I ever have to go through such an ordeal again, possibly with more data, I am getting a gaming mouse!

16 Likes

Thanks Jeremy. I just put the finishing touches on the code and the notebook, which gives a walk-through, and put it on GitHub here.

1 Like

Not deep learning (yet), but I acquired all of the skills used to build an explainable model of biological age here.

Wood T, Kelly C, Roberts M and Walsh B. An interpretable machine learning model of biological age

Let me know if you can think of any way to make it better!

3 Likes

Finally had time to read the Loss Landscape Visualization Paper (presented at NeurIPS 2018) recommended in Lesson 7.

I have written this Medium post summarizing the paper, mostly going section by section and explaining the concepts.

As said during the lecture, the paper is pretty accessible. A few terms may be new, so I have provided links to help understand them.

3 Likes

Update on returning an image. Right now I’m saving the output image as a PNG and using an HTML file to display it. However, I’ve run into a weird issue where the model predictions from the web app are very different from the predictions in a Jupyter notebook. For example:

Input: [image: Abyssinian_61]

Notebook prediction: [image: test]

Web app prediction: [image: saved_image]

Edit:

This was a weird one, but I figured it out. Predicting on an image eventually passes through the loss_batch function, which gets a prediction but also applies whatever loss function is present in the model. I wasn’t specifying a loss function, so it defaulted to cross entropy, which resulted in my output being put through a sigmoid, causing the issue above. Coding my loss function into the app and passing it to the learner fixed things.

This behavior seems a bit weird, though. I feel like a loss function shouldn’t affect predictions from a trained model.
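
For anyone who hits the same thing, here is a minimal sketch (fastai v1-style; the path, class list, saved-model name, and loss function are placeholders for whatever the app actually uses) of passing the training loss function to the learner used for inference:

```python
from fastai.vision import *
import torch.nn.functional as F

# Stand-in for whatever loss the model was actually trained with.
my_loss_func = F.cross_entropy

# Rebuild a single-image data object and a learner that uses the SAME loss
# function as training, so the activation applied at predict time matches.
data = ImageDataBunch.single_from_classes(path, classes,
                                          ds_tfms=get_transforms(), size=224)
learn = create_cnn(data, models.resnet34, loss_func=my_loss_func)
learn.load('trained_model')                       # weights saved during training
pred_class, pred_idx, probs = learn.predict(open_image('uploaded.png'))
```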

Hi all,

I just wrote my first blog post on building a deep learning model to distinguish pictures from 10 classic animated movies (Disney’s The Little Mermaid, Beauty and the Beast, Pocahontas, Tarzan, Mulan, and Hercules, and Studio Ghibli’s Castle in the Sky, Howl’s Moving Castle, Kiki’s Delivery Service, and Princess Mononoke), including:

  • A guide to extracting images from movie frames
  • A validation set splitting strategy
  • Training with fastai and ResNet50 to reach 94% accuracy
  • Debugging the model with Grad-CAM (see the sketch after this list)
  • Publishing it as a simple web application using Amazon Beanstalk (all thanks to @pankymathur), which also includes the Grad-CAM visualization for your uploaded image
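
Here is a minimal hook-based Grad-CAM sketch (my own plain-PyTorch illustration, not the code from the blog post): capture a chosen conv layer’s activations and gradients, pool the gradients per channel, and use them to weight the activation maps.

```python
import torch.nn.functional as F

def grad_cam(model, img_tensor, target_layer, class_idx):
    # target_layer would typically be the final conv block of the backbone (hypothetical choice)
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    h2 = target_layer.register_backward_hook(lambda m, gi, go: grads.update(v=go[0]))
    try:
        model.eval()
        logits = model(img_tensor.unsqueeze(0))           # add a batch dimension
        logits[0, class_idx].backward()                   # gradient of the chosen class
        w = grads['v'].mean(dim=(2, 3), keepdim=True)     # pool gradients per channel
        cam = F.relu((w * acts['v']).sum(dim=1))          # weighted sum of activation maps
        cam = cam / (cam.max() + 1e-8)                    # normalize to [0, 1]
    finally:
        h1.remove(); h2.remove()
    return cam[0].detach()                                # heatmap to upsample and overlay
```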

There are a few interesting things about this project, but perhaps the most surprising is how well the ResNet model can recognize real-life images of cosplayers, even though it was trained entirely on a different data distribution (2D movie frames and black-and-white sketches).
Here are a few correct predictions where the model actually focuses on human faces to make its decisions:

[example predictions with Grad-CAM heatmaps]

The blog post is on Medium (also on my personal site). The live demo is here. Any feedback is welcome!

14 Likes

Hi everybody,

I built a model on the Tamil character dataset. It has 125 classes and each class has 100 images. (The characters are not handwritten.) I haven’t done any preprocessing at all.

I was able to achieve an accuracy of 66.75% using a CNN (ResNet-34). Here is the link to my GitHub repo: https://github.com/bhuvanakundumani/Tamil_char_recognition.git

I would be happy to receive feedback or suggestions to improve my accuracy.

Thanks

3 Likes

Hi everyone,

I was curious about whether a CNN could learn to count objects in images. It turns out this is an active area of research, with at least two approaches: one via detection (bounding boxes) and another via regression.

I wrote a notebook exploring counting objects by regression: the task is to count the number of horizontal rectangles in images containing both horizontal and vertical rectangles on a black background. I tried to design the experiment to be challenging enough that a CNN making accurate predictions could reasonably be interpreted as having learned to count. Then I dove into analyzing the test performance to see whether the CNN generalizes well.

Details are in the notebook and readme, but the conclusion is that the CNN performs very well at learning to count objects, with a very interesting ability to generalize to images generated with parameter values not seen during training.

Images: synthetically created. The number on top is the label of the image (the target variable for regression), which is the number of horizontal rectangles.

The images were generated using three parameters: the number of objects (horizontal rectangles, i.e. the label), the total number of rectangles, and the size of the rectangles (constant within each image). Training images were generated using certain parameter values, while test images also used additional new values, in order to evaluate the capacity of the trained CNN to generalize beyond training.

The number of objects in training is between 5 and 45 (only 28 values in this range), while for testing all values from 0 to 50 were used.

The reason for including both horizontal and vertical rectangles, as opposed to only horizontal ones, was to prevent the CNN from learning the easy correlation between the number of white pixels and the rectangle count.
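
To make the setup concrete, here is a minimal sketch (my own reconstruction with assumed sizes, not the notebook’s code) of generating one such image: n_h horizontal rectangles (the label) plus vertical distractors of the same size on a black background.

```python
import numpy as np

def make_image(n_h, n_total, rect_size=(4, 12), img_size=128, seed=None):
    rng = np.random.default_rng(seed)
    img = np.zeros((img_size, img_size), dtype=np.uint8)
    h, w = rect_size
    for i in range(n_total):
        # the first n_h rectangles are horizontal, the rest are vertical (transposed)
        rh, rw = (h, w) if i < n_h else (w, h)
        y = rng.integers(0, img_size - rh)
        x = rng.integers(0, img_size - rw)
        img[y:y + rh, x:x + rw] = 255
    return img, n_h          # image and its regression label

img, label = make_image(n_h=7, n_total=20)
```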

[Plot: training/validation loss (MSE)]

Performance: the mean absolute error (MAE) on validation was 1.4, and on testing (including images generated with values different from those in training) went up to 2.3.

[Plots: actual vs. predicted, actual vs. error, and relative-error distribution, shown for test images with image-parameter values seen in training]

11 Likes

Hi,
I was able to achieve an accuracy of ~89% using your code by modifying the learning rate and training for more epochs. Here is my code: https://gist.github.com/shyampagadi/1215938496fdf4cb72977840dfb66ec2
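
In outline, a minimal sketch (fastai v1-style; the path and values here are placeholders, not the exact gist contents) of those two changes:

```python
from fastai.vision import *

# `path` points at the Tamil character images, one folder per class (assumed layout).
data = (ImageDataBunch.from_folder(path, valid_pct=0.2,
                                   ds_tfms=get_transforms(), size=224)
        .normalize(imagenet_stats))
learn = create_cnn(data, models.resnet34, metrics=accuracy)

learn.lr_find()                                    # run the LR range test
learn.recorder.plot()                              # inspect the curve, pick a max LR
learn.fit_one_cycle(10, max_lr=slice(1e-4, 1e-3))  # more epochs with the tuned LR
```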

2 Likes

Hi,
I have been working with medical imaging and the fastai library has been really helpful for my experiments, so I’m sharing some of the work I have been doing: https://github.com/renato145/fastai_scans
It has a data block API heavily inspired by the one for images (with visualizations in 2D and 3D), some simple ready-to-call models, a 3D version of the dynamic UNet, and some transformations (at the moment not as good as the affine transforms in vision).
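
As a rough illustration of what the jump from 2D to 3D involves (a toy sketch of my own, not the fastai_scans API), the core change is swapping the 2D conv/batch-norm layers for their 3D counterparts so the network operates on (depth, height, width) volumes:

```python
import torch
import torch.nn as nn

def conv3d_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True))

x = torch.randn(2, 1, 32, 64, 64)   # (batch, channels, depth, height, width)
y = conv3d_block(1, 16)(x)          # -> torch.Size([2, 16, 32, 64, 64])
```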

In the notebooks folder there is an example of segmentation and some transformations. There are also methods for doing classification and parallel classification+segmentation (although I haven’t put up examples for those yet, because most of the work I have been doing was on private data >.<). I hope it can be useful :slight_smile:.

PS: What good public datasets are there that mix classification and segmentation tasks for medical images?

9 Likes

Since we have been using Leslie Smith’s research extensively during the course (plus I consider the interview to be Lesson 8), I decided to take a look at the original 2018 neural network hyper-parameters paper.

I have written this Medium post summarizing the key parts of the paper and explaining the concepts.

If you have completed the course, you will find the paper accessible; it is also pretty nice to go through its sections one by one and observe the research process.
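
If you want to try the paper’s recipe directly, here is a minimal sketch (fastai v1-style; MNIST_SAMPLE and the values are stand-ins for a real dataset and tuned numbers) of the LR range test followed by one cycle with cyclical momentum:

```python
from fastai.vision import *

path = untar_data(URLs.MNIST_SAMPLE)              # tiny stand-in dataset
data = ImageDataBunch.from_folder(path, ds_tfms=get_transforms(do_flip=False), size=28)
learn = create_cnn(data, models.resnet18, metrics=accuracy)

learn.lr_find()                                   # the LR range test from the paper
learn.recorder.plot()
learn.fit_one_cycle(3, max_lr=1e-2,               # one cycle of learning rate...
                    moms=(0.95, 0.85))            # ...with cyclical momentum
```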

6 Likes

Often I find myself feeling burdened by the slow progress I’m making on my deep learning journey… but I set that aside for a bit to write up some thoughts, insights, handy to-dos, and setup guides for the non-geeks trying to do deep learning:

Special thanks to @jeremy and @rachel - your positive influence made this possible.
Also thanks to @wgpubs, @hiromi and @rob for their posts on vim setup for fastai. I built on their inputs to get my setup guide going.

5 Likes

I’ve created a surname language classifier with 67% accuracy over 16 languages based on the PyTorch example using fast.ai.

I treat the activations from late in the network as an embedding for the names and use them to find “similar” names via distances in this embedding space. It doesn’t perform fantastically, but it’s a useful proof of concept for better-performing models on more complex tasks.

> closest('Thomas')

Manus (Irish): 49.601768493652344
Jemaitis (Russian): 56.250274658203125
Horos (Russian): 63.35132598876953
Klimes (Czech): 73.13825225830078
Bertsimas (Greek): 73.65045166015625
Tsogas (Greek): 79.87809753417969
Simonis (Dutch): 85.69441223144531
Honjas (Greek): 86.7238998413086
Mihelyus (Russian): 87.06715393066406
Grotus (Russian): 88.79036712646484

Highlights:

  • We can create our own character-level Tokenizer, wrap it in a TokenizeProcessor, and pass it to a text list as a processor (a minimal sketch follows this list)
  • Using a balanced training set (which means including examples from rare classes multiple times) gives a much better result on a balanced validation set (with an equal number from each class). The full training set gives ~30-40% accuracy, whereas the balanced set gives ~60-70%.
  • The default fast.ai text classification model works really well in this context without tuning. This is pretty astounding given it was tuned for word-level tokens.
  • Embeddings give much better results than one-hot encoding the inputs (the fast.ai text classifier does this for you)
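
A minimal sketch of the character-level tokenizer idea from the first bullet (fastai v1-style; the DataFrame and its 'name'/'language' columns are assumed, not my exact code):

```python
from fastai.text import *

class CharTokenizer(BaseTokenizer):
    "Split each surname into a list of characters."
    def __init__(self, lang='en'): self.lang = lang
    def tokenizer(self, t): return list(t)
    def add_special_cases(self, toks): pass

processor = [TokenizeProcessor(tokenizer=Tokenizer(tok_func=CharTokenizer,
                                                   pre_rules=[], post_rules=[]),
                               mark_fields=False),
             NumericalizeProcessor()]

# df is assumed to have a 'name' column and a 'language' label column.
data = (TextList.from_df(df, cols='name', processor=processor)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='language')
        .databunch(bs=64))
```
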
4 Likes

I’ve created this tool, Code with AI, which tries to solve a problem I used to face: while taking part in competitive programming competitions, I sometimes wished for something that would tell me (or give hints about) which competitive programming concept would be used to solve a given problem!
Thanks to @jeremy and fast.ai, I have been able to solve this problem!

Here’s a Demo

Some details about the current model: it solves a multi-class classification problem with >80 different classes of problems and an imbalance factor of ~100, and uses a pretrained WikiText-103 language model.

The current model has an F1 score of ~49, and I’m thinking of improving it further by using a bidirectional RNN and an approach similar to the DeViSE paper: instead of training the model to predict 0 or 1, train it to move closer to the embedding-vector representation of the labels. The intuition is that labels in competitive programming like graph, dfs, and bfs aren’t disjoint. I will share the results I get with that approach.
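
Roughly, that DeViSE-style idea looks like this (my own sketch with assumed dimensions, not the app’s actual code): map the encoder output into the label-embedding space and train with a cosine loss toward the tag’s embedding instead of a 0/1 target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_tags, emb_dim = 80, 300
label_emb = torch.randn(n_tags, emb_dim)          # stand-in for pretrained tag embeddings

class DeviseHead(nn.Module):
    "Project the text encoder's features into the label-embedding space."
    def __init__(self, in_features, emb_dim):
        super().__init__()
        self.fc = nn.Linear(in_features, emb_dim)
    def forward(self, x): return self.fc(x)

def devise_loss(pred, target_idx):
    target = label_emb[target_idx]                # embedding of the true tag
    return 1 - F.cosine_similarity(pred, target, dim=-1).mean()
```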

6 Likes

Hello everyone,

I had never participated in any machine learning hackathons. With fastai tabular I made an attempt and participated in two hackathons, where I secured rank 152 and rank 31 respectively, although in the first hackathon I didn’t include all the tables of training data.

Here is the link to the notebook for the second hackathon: https://github.com/PrajwalPrashanth/GenpactMlHackathon/blob/master/all3weekmod.1500200(1).ipynb. I tried changing the embeddings and using different numbers of neurons for the continuous data, but it didn’t help improve the score.

I wanted to know what more could have been done, like any data manipulation steps, a different NN architecture, or any other suggestions for working with tabular data.
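
For context, here is a minimal sketch (fastai v1-style; the column names are illustrative stand-ins, not the competition’s exact schema) of the kind of knobs I tried: per-column embedding sizes and the fully connected layer sizes.

```python
from fastai.tabular import *

# Illustrative column names, not the exact competition schema.
cat_names  = ['center_id', 'meal_id', 'emailer_for_promotion']
cont_names = ['checkout_price', 'base_price', 'week']
procs = [FillMissing, Categorify, Normalize]

data = (TabularList.from_df(df, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_rand_pct(0.2)
        .label_from_df(cols='num_orders', label_cls=FloatList, log=True)
        .databunch(bs=1024))

learn = tabular_learner(data,
                        layers=[400, 200],                         # hidden layer sizes
                        emb_szs={'center_id': 16, 'meal_id': 32},  # override embedding sizes
                        metrics=root_mean_squared_error)
learn.fit_one_cycle(5, 1e-2)
```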

3 Likes

very cool!

Hey Dave, just to let you know that your project is awesome and the published notebooks super helpful to learn from. Thank you!

I was able to deploy my model (ResNet-34 trained on the PlantVillage dataset) on Render; it’s super fast and super easy, like fastai. All thanks to @anurag.

Notebook : https://nbviewer.jupyter.org/github/shubhajitml/crop-disease-detector/blob/master/notebook/plant_village.ipynb

Demo : https://which-crop-disease.app.render.com/

Any suggestions for improvement are appreciated.

6 Likes

I have implemented semantic segmentation for the Pascal VOC dataset using the fastai v1.0.34 library. I have made a GitHub repository:
https://github.com/keyurparalkar/Semantic-Segmentation-with-UNETs
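
For anyone wanting to try the same setup, here is a minimal sketch (fastai v1-style; the paths, label function, and class list are placeholders rather than the repo’s exact code, and some names differ slightly between v1 point releases) of the dynamic-UNet pipeline:

```python
from fastai.vision import *

codes = ['background', 'aeroplane', 'bicycle', 'bird']  # ...plus the rest of the 21 VOC classes

# Masks are assumed to live in a labels folder with the same stem and a .png extension.
get_y_fn = lambda x: path/'labels'/f'{x.stem}.png'

data = (SegmentationItemList.from_folder(path/'images')
        .split_by_rand_pct(0.2)
        .label_from_func(get_y_fn, classes=codes)
        .transform(get_transforms(), size=224, tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats))

learn = unet_learner(data, models.resnet34)
learn.fit_one_cycle(10, 1e-3)
```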

Feedback and suggestions for improvement are appreciated.

3 Likes

Does your dataset contain questions from CodeChef only?