Unofficial release of part 1 v2


(Nadine) #43

Hi everyone,

Thanks a lot to Jeremy for the great course; it’s helping me a lot.

I tried to use the ideas from the Lesson 1 Notebook to participate in the IEEE Camera Model Identification Challenge on Kaggle. I get a validation accuracy of around 50%, which I think is not too bad. However, when I make and submit predictions on the test set, I seem to produce random noise and get only around 10% on the leaderboard.

Does anyone have any idea what I am doing wrong?

Thanks a lot!!


#44

There are two things that could be happening: either you messed up creating your submission, or (more likely) your validation set is not representative of the test set.

From what I hear about the competition, likely it is the latter but don’t take my word for it :slight_smile:

Here is, IMO, the ultimate resource on learning about train/val/test splits: http://www.fast.ai/2017/11/13/validation-sets/
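To make the "representative validation set" idea concrete for this kind of competition, here is a minimal sketch (all names hypothetical) of a device-aware split: holding out whole devices so the validation set mimics the test set's "unseen device" condition, rather than doing a purely random image-level split.

```python
# Hold out whole devices so no device appears in both train and validation.
# This is a toy sketch, not the competition's actual data layout.
import random

def group_split(items, group_of, val_frac=0.2, seed=0):
    """Split items so no group appears in both train and validation."""
    groups = sorted({group_of(x) for x in items})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_val = max(1, int(len(groups) * val_frac))
    val_groups = set(groups[:n_val])
    train = [x for x in items if group_of(x) not in val_groups]
    val = [x for x in items if group_of(x) in val_groups]
    return train, val

# Toy example: (filename, device_id) pairs
images = [(f"img_{i}.jpg", f"device_{i % 5}") for i in range(20)]
train, val = group_split(images, group_of=lambda x: x[1])
assert not {d for _, d in train} & {d for _, d in val}  # no device leakage
```

A random split would let images from the same device land on both sides, which inflates validation accuracy relative to a test set shot on different hardware.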

I don’t mean to sound like a dumb ass (saying this normally precedes people sounding like a dumb ass, so please accept my apologies for indeed being a dumb ass :wink: ), but in the future would you be so kind as to create a separate thread, or find a thread which already discusses what you want to ask about? I am happy to help, but as it stands, what we are discussing here has nothing to do with the original intent of this thread, and that will make the forum harder and more annoying to use for people who come here in the future.

Many thanks, and once again please accept my apologies! :slight_smile: I hate to be the thread police, but maybe with slightly better housekeeping we can all make this forum even more useful to others :slight_smile:


(Nadine) #45

Hi Radek,

Thank you very much for your answer. This was my very first post here, and I was not quite sure whether I should really open a new thread for this or just ask here, as I was using the Lesson 1 Notebook. Thanks for clarifying how to handle this in the future. I have now found a thread by someone with the same problem, and I think your assumption is right.

Many thanks to you!


(Navin Kumar) #46

It would do you a world of good to start watching Part 1 v2. Part 1 v2 covers many new things that aren’t taught in the Coursera course…
Hope it helps!


(Clemens Adolphs) #47

Fantastic! I just started doing the course because I love the approach to teaching you set out in the intro.

I just signed in to the forums for the first time because I couldn’t get the Keras VGG16 to train properly as compared to the “Part 1 V1” VGG16 version, but now I’m super excited to see how the Version 2 turns out.


(Jerryphan) #48

Hi everyone, hi Jeremy!
Wishing Jeremy a good year full of health.


#49

Hey Nad,

Great that you are participating in this Kaggle competition! There is a post in the discussion section that says:

Test data is different from train in 3 big ways (then there are smaller issues):

  1. 50% of it is manipulated as per the instructions
  2. Taken from a different device (BIG issue)
  3. Center-cropped, fixed res.

(source: https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/47896)
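To illustrate, here is a rough pure-Python sketch of applying those test-set conditions to your own validation images so they better match what you're scored on: center-crop to a fixed resolution and "manipulate" roughly half of them (a gamma change stands in here for the competition's actual list of manipulations; the images are toy grayscale arrays).

```python
# Toy sketch: make validation images resemble the test set by
# center-cropping and manipulating ~50% of them.
import random

def center_crop(img, size):
    """img: 2D list-of-lists (grayscale); return a size x size center crop."""
    h, w = len(img), len(img[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

def gamma(img, g=0.8):
    """Apply gamma correction to pixel values in [0, 255]."""
    return [[int(255 * (p / 255) ** g) for p in row] for row in img]

rng = random.Random(0)
# Four fake 512x512 grayscale "validation images"
val_images = [[[(i * j) % 256 for j in range(512)] for i in range(512)]
              for _ in range(4)]
processed = []
for img in val_images:
    crop = center_crop(img, 256)
    if rng.random() < 0.5:        # ~50% manipulated, as in the test set
        crop = gamma(crop)
    processed.append(crop)
```

Validating on images prepared this way should give a score much closer to the leaderboard than validating on untouched full-resolution training crops.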

I am also participating. Let me know if you wanna exchange ideas. I am currently working with something based on this code: https://www.kaggle.com/c/sp-society-camera-model-identification/discussion/47896

Good luck!


(Nadine) #50

Hey fabsta,

thanks so much for your reply. Your code looks great, I’m far from producing anything like it :wink: I will definitely have a deeper look!


(Sally Shrapnel) #51

Hi,

Is the Brisbane group open? I am currently working through the course on my own and would like to join a group if possible.

Sally