Congratulations Team Radek!


(Sanyam Bhutani) #1

Two great finishes by Team @radek

Congratulations! :smiley:


#2

Thank you very much @init_27 :slight_smile:

I am planning to do a more detailed write-up over the next day or two - for the time being, here is a Twitter thread with a brief overview, in case anyone is interested.


(Sanyam Bhutani) #3

@radek Could you kindly share some more insights into how you would approach prototyping in such “big data” problems?

How do you approach the problem and when do you decide to scale your model to the complete dataset?

Thanks.


#4

I used a custom sampler that partitions the dataset into 40,000 crops per epoch. I trained with a batch size of 128 using the 1-cycle policy and SGD, which with smaller models meant I could train for one epoch in ~40 seconds.
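The sampler idea above can be sketched roughly like this - a minimal, framework-agnostic version where the class name and defaults are my own; with PyTorch it would subclass `torch.utils.data.Sampler` and be passed to the `DataLoader` via its `sampler` argument:

```python
import random

class SubsetPerEpochSampler:
    """Yields a fresh random subset of indices each epoch, so epoch
    length stays fixed no matter how large the full dataset is."""

    def __init__(self, dataset_size, samples_per_epoch=40_000):
        self.dataset_size = dataset_size
        self.samples_per_epoch = min(samples_per_epoch, dataset_size)

    def __iter__(self):
        # draw a new random subset each time the loader iterates (i.e. per epoch)
        return iter(random.sample(range(self.dataset_size), self.samples_per_epoch))

    def __len__(self):
        return self.samples_per_epoch
```

This keeps epoch time constant even when the train set grows past a million examples, while still showing the model the whole dataset over many epochs.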

I started with 32x32 crops and resnet34. Initially, for the test set I only used detections from YOLO with a score > 0.25. That gave me a test set of ~100k relationships, IIRC.

I gradually moved up to crops of 128x128 and training bigger models.

I use two models (one for the ‘is’ relationship, one for all the others). My train set for the first was ~300k images and for the second over a million. Because I use the custom sampler, the size of the train set was not a problem - I could train for as many epochs as I wanted.

Ultimately, I trained inceptionv4 for 100 epochs for the non-‘is’ relationships and ~half that for the ‘is’ relationship. Training for 100 epochs took just over 6 hours, which I did overnight.

As for the test set: because of the metric used (mAP), including more detections couldn’t hurt and actually improved the score, even detections with lower confidence. As I lowered and ultimately removed the detection threshold, my test set grew to nearly 8 million images. I compute a lot of things on the fly when I initialize the datasets (I use custom datasets talking to MongoDB, and upon dataset initialization I calculate things like IoUs and store coordinates from the CSV file as instance variables), so to run detection on that many object pairs I had to partition the test set into 3 batches and concatenate the results.
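For reference, the IoU computed at dataset initialization is the standard intersection-over-union of two boxes. A minimal sketch, assuming the `(x1, y1, x2, y2)` corner convention (the original code may use a different box format):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes,
    each given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    # corners of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # clamp to zero so disjoint boxes contribute no overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Precomputing these once per dataset, rather than per batch, is what makes the on-the-fly approach workable at this scale.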

The library integrates seamlessly with a custom pipeline like the one above, and the methodology I use is the one we learned in class: starting small and growing the solution organically.


#5

I wrote a few more words on the solution on Kaggle.

I wanted to write a post on these forums speaking to the process of how I work towards a solution and at the same time answer some of the questions I get asked. The multiple versions of the post I came up with had a couple of fatal flaws:

  • were too long
  • left me emotionally exposed to an extent that is probably unnecessary and that I don’t feel comfortable with
  • somehow managed to portray whatever it is that I do as remarkable, which left me feeling like a fraud

Because the truth of the matter is that nothing that I do is remarkable in any way I can imagine. And I don’t care that it isn’t. In fact, it is quite comforting.

See my post above, the one where I speak to using MongoDB and the custom dataset / sampler? Well, it sounds like I am doing something advanced - but I had not used MongoDB before this competition, and I mostly use very basic functionality you can learn from the MongoDB tutorial (and the less trivial bits I got from Stack Overflow)! I wrote the sampler for an earlier competition I participated in, and I think I even pushed it to my GitHub; it’s just a couple of lines of code.

It’s easy to make something seem non-trivial if you speak about it in an off-hand manner, even if it took you 2 or 3 months to arrive at! Incremental changes over a longer period of time can make a world of difference.

And yes, I started working on this competition when it launched (and I think this is the only reasonable thing to do on Kaggle when you are just starting out, to join a competition as it launches).

But I digress. Yes, I do not possess any secret knowledge nor any particular abilities that would make me stand out.

But over the years I have accumulated ideas that are not my own, yet that now make me feel quite confident in tackling any ML or programming problem. Many of them come from the lectures, but I feel that maybe it helps to be reminded of them every now and then. And maybe such a reminder carries a different weight if you hear it from someone literally in the same boat as you, who probably struggles with the same problems as you do.

Or maybe it is just that we need to hear an idea a couple of times before we can adopt it. Whatever the reason, happy to do the reiterating.

As such, I think I found a solution to my dilemma. Not caring too much about what happens to my Twitter account (as I genuinely feel this might be the end of it), here is what I am going to do:

Until further notice, I will tweet out a single idea each day before work that I find useful for programming or competing in Kaggle competitions.

This solution is nice because it allows me to step outside my comfort zone in a manner I feel okay with. I do not put too much on the line - having a bunch of awesome followers on Twitter is nice but should they run for the hills as a result of me doing this, well, this does not seem like too big of a deal. And last but not least, I do not feel like I am doing this for myself so maybe this is the right thing to do.


#6

When you compress 3 months of work into one post, it always seems a bit like magic :slight_smile: Especially if the reader can’t see the day-to-day process.

I think it works both ways. Sometimes, during daily struggles, we miss the bigger picture and how much we have learned and accomplished. I like doing 3-month reviews: what I have learned, and what to learn next.


(Sanyam Bhutani) #7

Thank you so much for the extensive and honest details!

And looking forward to reading your tweets :slight_smile:


(Even Oldridge) #8

The tenacity to push hard and do these things, even if they are in their own right unremarkable, is to me a huge undertaking and achievement and in that sense is remarkable. Working every day on something, and not giving up is a big part of what makes good people great. I know we barely ‘know’ each other, but I’m super proud of you for this accomplishment. And I find it inspiring! So thank you for sharing.


(Jose Quesada) #9

Radek, this shows amazing character; whoever ends up hiring you will have a fantastic team member, and my guess is that you will help many other people in your career. I told you last time we met: I want to help you any way I can.

To anyone reading: humility, curiosity, and empathy (peppered with hard work) are a telltale sign of someone who will be successful, or at least happy. Radek is showing he has these things here. Even without the great Kaggle results, I could have anticipated a rising star just by reading those words.


#10

I feel that I had a hard time answering the questions because of all the great information in the fast.ai lectures. They cover how to learn, how to approach ML problems, how to reason about code, and much, much more.

The information is there so why ask me for help?

The reason is likely that we have a hard time believing simple things can work. Or that they might work for me.

I generally do not have a lot of answers and I seem to continue to be puzzled by things more than anything else.

When it comes to ML or programming, though, I do suspect that the simple answers we receive in the course are all there is.

Working on a new ML problem? Start with a simple model and try to improve it.

Want to take part in a Kaggle competition? Figure out how to make the first submission and see if you can make it a little bit better.

Want to learn the fast.ai material? Work through the notebooks as described in the lectures. Make them your own. Repeat with a new dataset. Explain the concepts to others. Blog or use Twitter for the full experience.

I do not have a very strong need to be right, but in general it is good to evaluate things based on the outcomes they bring, and the recipe as presented in the course seems to be working for me for the things I care about.

I know nothing advanced. Doing the simple stuff is all there is, so it seems. Also, whenever someone tries to convince me that advanced things exist and that it would be in my best interest to know them, generally the best strategy seems to be to run for the hills.

Below is a collection of all the tweets