Share your work here ✅

Hi all :wave:t3:,

I have recently finished Lecture 1 and would like to share my first baby steps in deep learning :baby:t3:

In my blog post, I describe the steps I took to build a basic CNN on Google Colab that recognizes 40 characters from the TV show ‘The Simpsons’, as well as the model’s predictions. :point_down:t3:

https://raimanu-ds.github.io/tutorial/can-ai-guess-which-the-simpsons-character/
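
For anyone curious, the core of the training described in the post boils down to a few lines of fastai v1. Here is a rough sketch, assuming the images sit in one folder per character; the path, image size and architecture are placeholders rather than the exact settings from the post (cnn_learner was called create_cnn in older fastai v1 releases):

from fastai.vision import *

# Hypothetical folder containing one sub-folder per Simpsons character
path = Path('data/simpsons')

# Hold out 20% of the images for validation, resize to 224x224, normalize with ImageNet stats
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224
                                  ).normalize(imagenet_stats)

# Transfer learning from an ImageNet-pretrained ResNet-34
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

# Look at the predictions (and the worst mistakes) the post discusses
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)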


I am currently working on a project to automatically process electric meter readings sent in by our customers to our mainframe. I am taking part 1 of this course as a trial and tested the fast.ai library with only 45 images: 15 analog meters, 15 standard, and 15 digital… the result? 87% accuracy!

Thanks fast.ai team!

Now, after adding another 25 images or so I’m getting this as a result:

Total time: 00:13

epoch train_loss valid_loss error_rate
1 0.369539 0.222055 0.000000
2 0.388205 0.288855 0.133333
3 0.341020 0.396107 0.266667
4 0.293078 0.345871 0.133333
5 0.318980 0.188861 0.000000
6 0.269424 0.203167 0.133333
7 0.274584 0.187801 0.000000

Am I overfitting?

Amir

In lesson 2, @jeremy said he had trouble showing us overfitting. Is your data a meter reading of four or five digits, like an odometer? If so, it seems like you should do about as well as the MNIST digit recognition models.
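
One quick way to check is to look at the full loss curves rather than a single column of error rates. A minimal sketch, assuming a fastai v1 Learner called learn that has already been trained:

# The recorder keeps the loss history from fit_one_cycle
learn.recorder.plot_losses()   # training vs. validation loss

# Classic overfitting looks like training loss still falling while validation
# loss climbs. Also note that with a validation set this small, a single
# misclassified image moves error_rate a lot, so the jumps above may just be noise.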

Busy with practice on the first two lessons of DL1 2019.

Because of a (not so helpful yet) background in health care (MD, epidemiology), I joined the Histopathologic Cancer Detection competition on Kaggle. The goal is to classify histopathology slides/images for the presence of cancer (present or not present).

Using resnet50, 75% of the data, and only basic TTA (no extras), I reached the top 8% of the competition. It’s probably not very fancy, and at times I feel like a monkey pressing a button. Still, it already made me happy :slight_smile:
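
For anyone who wants to reproduce the TTA step, it is essentially one call in fastai v1. A sketch, assuming learn is a trained cnn_learner on the competition data and that a test set was added to the DataBunch:

from fastai.vision import *

# Test-time augmentation: average predictions over several augmented versions of each image
preds, y = learn.TTA(ds_type=DatasetType.Valid)
print(accuracy(preds, y))

# For the Kaggle submission, run the same thing over the test set
test_preds, _ = learn.TTA(ds_type=DatasetType.Test)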


I finally finished the first half of the lesson 2 download notebook, with much help from you all. The error rate seems high, but I think my data is fairly hard to classify. It was probably not a good idea to mix football and soccer, but I was curious to see what would happen.

I think this link makes it available to anyone interested.

And I think I have a loop running… on the cleaner widget. :slight_smile:

https://colab.research.google.com/drive/1xW5pnTcEysdEem6Mj29wUDgi6eDtvMhr
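
For reference, the usual cleaner-widget pattern from lesson 2 looks roughly like this (a sketch assuming fastai v1 and an already-trained learn; note that the widget relies on ipywidgets and may not render properly on Colab, which can look like it is stuck in a loop):

from fastai.widgets import DatasetFormatter, ImageCleaner

# Rank the dataset by loss so the most suspicious images come up first
ds, idxs = DatasetFormatter().from_toplosses(learn)

# Opens an interactive widget; flagged images are recorded in path/'cleaned.csv'
# rather than being deleted from disk
ImageCleaner(ds, idxs, path)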


Can networks pre-trained on ImageNet be good person recognizers? To answer this, I collected images of look-alike celebrities such as Keira Knightley vs. Natalie Portman (both being doppelgangers in Star Wars :wink:), trained a ResNet50 network and achieved 90% accuracy.

But wait, how can we be sure the network learned features of faces rather than clothes, background, etc.? :thinking: Thanks to Grad-CAM and LIME, we can inspect the network’s decision-making process. Finally, I applied t-SNE to visualize how the network divides faces into clusters (thanks to @henripal for the methods).

For example, the algorithm is better at capturing male-versus-female features than skin color.

A similar project can be found on GitHub.
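
For anyone wanting to try the clustering part, the t-SNE step itself is short once there is one feature vector per image. A sketch with scikit-learn, where features is assumed to be an (n_images, n_features) array taken from the network’s penultimate layer (e.g. via a forward hook) and labels the matching class ids:

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# features: (n_images, n_features) penultimate-layer activations
# labels:   (n_images,) integer class ids, used only for colouring
emb = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(features)

plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap='tab10', s=10)
plt.title('t-SNE of face embeddings')
plt.show()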


I wrote a post about collaborative filtering. It includes an explanation and preliminary models in Keras, plus a brief overview of fast.ai’s collaborative filtering functions at the end. I will polish the notebook on Google Colab and share it as well.
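
For comparison, the fast.ai part mentioned at the end is pleasantly short. A sketch assuming fastai v1 and a ratings DataFrame whose first three columns are user, item and rating (the column names and hyperparameters here are placeholders):

from fastai.collab import CollabDataBunch, collab_learner

# ratings: a pandas DataFrame such as MovieLens (userId, movieId, rating)
data = CollabDataBunch.from_df(ratings, seed=42, valid_pct=0.1)

# Embedding dot-product model, with the output squashed into the rating range
learn = collab_learner(data, n_factors=40, y_range=(0.5, 5.5), wd=1e-1)
learn.fit_one_cycle(5, 5e-3)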

Hey guys. I went through the lesson notebook, after which I created a dataset of elephant pictures organised into 2 folders: Indian elephant and African elephant.

I chose this topic because, even for a person who is used to seeing elephants, differentiating an African one from an Indian one is a pretty tough task.

I trained on a total of 20 images, and the accuracy I got was 100% within 6 epochs. I think the model is overfitting. How do I check this, and if it is, how do I solve the problem?

I had set valid_pct to 0.2, so my validation set contains 4 images. I am working on a bigger dataset currently.

Please do give your suggestions and feedback.:grinning:
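
One thing worth checking before anything else is exactly which images ended up in that tiny validation set and whether the perfect score survives a look at the confusion matrix. A sketch, assuming fastai v1 and the two-folder layout described above (the path is a placeholder):

from fastai.vision import *
import numpy as np

np.random.seed(42)                 # make the 80/20 split reproducible
path = Path('data/elephants')      # hypothetical: 'indian' and 'african' sub-folders

data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=224
                                  ).normalize(imagenet_stats)
print(len(data.train_ds), len(data.valid_ds))   # with 20 images: 16 train, 4 valid

learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(6)

# With only 4 validation images, 100% accuracy can easily be luck; inspect the
# held-out images and where (if anywhere) the model is uncertain
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()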

But I thought it was fairly easy to train a classifier for football vs baseball. Can you specify how many images you trained the model on?

Sure is! :smiley:


You can use your Google Drive to store the dataset and then access it from your Colab notebook using the built-in API:

from google.colab import drive
drive.mount('/content/drive')

Paste this in a cell and run it. It will ask you for an authentication code, which you can get by clicking the link it provides. Once mounted, you can use it like a local drive.
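
Once it is mounted, you can point your notebook straight at a folder on the drive; for example (the datasets folder name here is just a placeholder for wherever you uploaded your images):

from pathlib import Path

# Files on your Drive appear under /content/drive/My Drive/...
path = Path('/content/drive/My Drive/datasets')
print(list(path.iterdir()))   # quick sanity check that the mount worked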


I recently participated in “the first image-based structural damage recognition competition, namely PEER Hub ImageNet (PHI) Challenge”, organised by the Pacific Earthquake Engineering Research Center. There were 8 detection tasks, such as damage level and material type; this kind of classification can aid disaster relief efforts through rapid assessment.

It was open not only to earthquake/engineering research teams from academia and industry but to anyone who wanted to compete, so I joined from the geologically safe environs of my kitchen table in London, with no knowledge of anything to do with earthquakes.

I used fastai and came 1st in 4 of the 8 tasks, and 2nd overall, just pipped to 1st by a team of researchers from Nanyang Technological University, Microsoft Research, Shenzhen Inst. of Tech., and UC Berkeley.

To me, this challenge underlines that the power of deep learning lies in how it democratises finding solutions to problems. You don’t need to be an expert in a field to add value if you have tools like fastai. The more open datasets become, the better.

https://apps.peer.berkeley.edu/phichallenge/winner/


Truly amazed at how easy fastai makes working on image recognition. With just 80 images, the model recognizes lions and tigers without error.

State-of-the-Art Results
Jeremy continually says that you can get world-class results in a few lines of code. I was admittedly skeptical, but after working through the first couple of lessons and applying the example code to a dataset of North American birds (NABirds), I was able to improve upon the current state-of-the-art accuracy (89.5% versus 87.9%). I have no illusions that this would stand if people with real experience worked on it, and people probably have better unpublished results, but it is still a huge confidence boost and I am excited to continue the journey. Take a look at my results on GitHub.



Hello everyone,

I wrote a routine to classify the most popular landmarks in Istanbul.

Landmark Classification with Convolutional Neural Networks

I’ve downloaded publicly available Instagram photos according to their hashtags with a script based on the instalooter library, and manually removed unsuitable images before training.
The dataset contains over 1,500 images with 5 different labels (Maiden’s Tower, Galata Tower, Hagia Sophia, Ortakoy Mosque, Valens Aqueduct).
I’ve used a resnet50 model with ImageNet pre-trained weights and trained it for 10 + 5 (fine-tuning) epochs.
The final losses are below.

train_loss valid_loss error_rate
0.023614 0.072020 0.023605
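
For reference, the 10 + 5 schedule above corresponds to the usual fastai v1 transfer-learning recipe; a sketch, where data is assumed to be an ImageDataBunch built from the five landmark folders and the learning rates are placeholders to be read off lr_find:

from fastai.vision import *

# Stage 1: train only the new head on top of the frozen ImageNet ResNet-50
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.fit_one_cycle(10)

# Stage 2: unfreeze the whole network and fine-tune with discriminative learning rates
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, max_lr=slice(1e-5, 1e-4))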

Hi everyone,

I’ve noticed that in the Lesson 3 image regression task (head coordinates), a slightly more complex procedure is required to convert the 3D head position to screen coordinates. More specifically, besides the intrinsic camera matrix multiplication (which is done in the original solution), there are also rotation and translation matrices that define the RGB camera’s position relative to the depth camera on the Kinect (the ground-truth head positions are given in depth-camera coordinates).

Here’s the relevant part of the notebook, which uses matrix multiplication for the 3D -> 2D coordinate conversion:

def convert_biwi(coords, cal):
    # Project the 3D head position (homogeneous coordinates) into the image plane
    pt = cal @ np.append(coords, 1)

    # Divide by depth to get pixel coordinates, returned in (y, x) order
    return tensor([pt[1]/pt[2], pt[0]/pt[2]])

def get_ctr(f):
    # Ground-truth 3D head centre, given in depth-camera coordinates
    ctr = np.genfromtxt(img2txt_name(f), skip_header=3)

    fcal = img2cal_name(f)

    # Intrinsic matrix of the RGB camera (first lines of the calibration file)
    cal_i = np.genfromtxt(fcal, skip_footer=6)
    # 3x4 projection matrix
    cal_p = np.eye(3, 4)
    # Rotation of the RGB camera relative to the depth camera, padded to 4x4
    cal_rot = np.genfromtxt(fcal, skip_header=5, skip_footer=2)
    cal_rot = np.vstack([np.c_[cal_rot, np.array([0, 0, 0])], [0, 0, 0, 1]])
    # Translation of the RGB camera relative to the depth camera, as a 4x4 matrix
    cal_t_vec = np.genfromtxt(fcal, skip_header=9, skip_footer=1)
    cal_t = np.identity(4)
    cal_t[0, 3] = cal_t_vec[0]
    cal_t[1, 3] = cal_t_vec[1]
    cal_t[2, 3] = cal_t_vec[2]
    # Full depth-camera -> image transform: intrinsics @ projection @ rotation @ translation
    cal = cal_i @ cal_p @ cal_rot @ cal_t

    return convert_biwi(ctr, cal)

(What I didn’t get is why I had to swap the x and y coordinates in the tensor([pt[1]/pt[2], pt[0]/pt[2]]) expression. Any advice?)

With that change, the validation error is more than 2x lower than before: 0.000971.

And after training a bit more with half the original learning rate, the validation error decreased by another 10x: 0.000100!

The results seem to be insanely accurate:

Wow, that feels like magic.


That’s great. And as for the monkey part, you’re not alone.

Brazilian jiu-jitsu or judo? I thought it would be interesting to try a classification challenge that most humans would find very difficult. Practitioners of both sports/martial arts wear similar clothing (the gi) and are usually grappling in the photos. My hope was that the classifier might pick up on some very subtle differences; for example, judo practitioners spend more time standing and often throw from an upright position, whereas BJJ takes place in larger part on the ground. Please see the notebook here. I’d love to read any suggestions for image augmentation or further improvements.
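
On the augmentation side, fastai v1’s get_transforms already covers most of what I would try first; a sketch, with parameter values that are only a starting point rather than anything tuned for this dataset:

from fastai.vision import *

# Horizontal flips make sense for grappling photos; vertical flips do not.
# Mild rotation, zoom, lighting and warp add variety without destroying the
# stance / ground-position cues the classifier needs to pick up on.
tfms = get_transforms(do_flip=True, flip_vert=False,
                      max_rotate=10., max_zoom=1.1,
                      max_lighting=0.2, max_warp=0.1)

# Pass these as ds_tfms=tfms when building the ImageDataBunch.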


Your LR at cell 17 appears too high - see how your validation results get very unstable? Try 10x lower for both LR numbers there.
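
Concretely (illustrative numbers only, since the actual values in cell 17 aren’t shown here), that means dividing both ends of the learning-rate slice by 10:

# If the unstable run used e.g. max_lr=slice(1e-4, 1e-2), try:
learn.fit_one_cycle(5, max_lr=slice(1e-5, 1e-3))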

Last week I got over 95% accuracy on the CamVid-Tiramisu dataset, on the very first training run: from a pretrained resnet-34 to 95%+ in a little over an hour!
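
For anyone who wants to try the same thing, the lesson 3 segmentation setup is roughly as below. This is a sketch using the standard CamVid download that ships with fastai v1; the CamVid-Tiramisu variant may need slightly different paths, and the pixel-accuracy metric here is a simplified stand-in for the lesson’s acc_camvid:

from fastai.vision import *

path = untar_data(URLs.CAMVID)
codes = np.loadtxt(path/'codes.txt', dtype=str)
get_y_fn = lambda x: path/'labels'/f'{x.stem}_P{x.suffix}'

# Images plus per-pixel label masks, split 80/20 at random
src = (SegmentationItemList.from_folder(path/'images')
       .split_by_rand_pct(0.2)
       .label_from_func(get_y_fn, classes=codes))

data = (src.transform(get_transforms(), size=128, tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats))

def seg_accuracy(input, target):
    # Pixel accuracy: fraction of pixels whose predicted class matches the mask
    target = target.squeeze(1)
    return (input.argmax(dim=1) == target).float().mean()

# U-Net decoder on top of a pretrained ResNet-34 encoder
learn = unet_learner(data, models.resnet34, metrics=seg_accuracy)
learn.fit_one_cycle(10, max_lr=1e-3)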

