Congratulations Team Radek: First FastAI_v1 silver medal!

Looks like the exciting times are here even before the course started!

Team @radek did it again, with a silver medal finish in the TGS competition, bringing another medal to the fastai hall of fame.

Congratulations! :smiley:

Link to Radek’s Tweet and Code


Congratulations!

congrats

Just a quick ‘stupid’ question: why was everyone resizing images to 128 px? Is that a requirement of U-Net?

Thank you :slight_smile: It was very interesting to read the Kaggle forums for this competition - it seemed like there was an explosion of people using the fastai library :slight_smile:

Here is a link to a tweet by a 22nd place solo finisher… that is a 22nd place finish out of 3291 teams! Now that is an impressive finish! Congrats Vishnu!

Also, the starter code that you posted a couple of days ago was great (and I think it could be very useful for anyone wanting to experiment with the dataset).


Convolutional arithmetic makes it a bit harder to work with an uneven number of pixels. Ideally you want to go for a power of 2 (that is also what most U-Net architectures support out of the box).

Another popular choice (apart from resizing to 128 or 256 px) was padding to obtain either of those sizes. I first resized the images to 202 px and then padded with 27 px on each side to get 256.
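
Not the actual competition code, but a minimal sketch of that resize-then-pad step in plain PyTorch (the helper name resize_and_pad and the bilinear resize are assumptions; reflection padding was a common choice):

```python
import torch
import torch.nn.functional as F

def resize_and_pad(img, size=202, pad=27):
    """Resize a (C, H, W) float tensor to size x size, then reflection-pad
    by `pad` pixels on each side, so 202 + 2 * 27 = 256 (a power of 2)."""
    img = F.interpolate(img[None], size=(size, size),
                        mode="bilinear", align_corners=False)[0]
    # F.pad takes (left, right, top, bottom) for the last two dimensions
    return F.pad(img[None], (pad, pad, pad, pad), mode="reflect")[0]

x = torch.rand(3, 101, 101)     # TGS images are 101 x 101
print(resize_and_pad(x).shape)  # torch.Size([3, 256, 256])
```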


Which technique gave you the greatest boost on your score? I tried the scSE blocks, but they did not improve my score much; the Lovász loss did give me a ~0.01 boost.
People claim they can get 0.84 after using the Lovász loss and scSE blocks. Using the fastai U-Net, I got something around 0.80-0.81.
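
For context, here is a rough sketch of the scSE idea being discussed (concurrent spatial and channel squeeze-and-excitation); this is a generic PyTorch implementation, not either poster's actual code:

```python
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation (scSE)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel SE: squeeze spatially, then re-weight each channel.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial SE: squeeze channels, then re-weight each spatial location.
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)

x = torch.rand(2, 64, 32, 32)
print(SCSEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```

In the segmentation papers it is typically inserted after each encoder/decoder block, so adding it to an existing U-Net means modifying the decoder rather than the loss.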

I also used fastai v1 to finish with a silver medal in the TGS competition. BIG congrats to all the fellow fastai classmates on your medal-winning solutions, @radek, @VishnuSubramanian and @wdhorton!! (Apologies if there are others I missed.)


Thanks @radek @jamesrequa.

Here is the final code, which gave me the best LB score.


Thanks @jamesrequa! I’m actually excited the competition is done because all my code was on fastai 0.7. Now I get to make the switch to 1.0 just in time for the new course!


Same here; I ended up around 0.82. I joined very late and will not make that mistake again :smile:. The K-fold validation took more than 6 hours to train to proper convergence, which was a pain. I will participate more actively once I get my hands on a decent GPU setup; otherwise it is not productive. Also, @VishnuSubramanian’s fastai baseline kernel helped a lot.


Shout out :beers: to you guys for bringing fastai to Kaggle. The rate at which you have progressed over the last year is truly inspiring. I hope you never stop until you reach the top :smiley:.


Congratulations legends!!

@radek, the base notebook has from nb_006 import *, but I can’t find any file for that import.

Congrats @radek! I have been following your progress since fastai V2 part 1. The amount of time and dedication that you have put in is outstanding and the results are there to see :clap:


Same goes for @jamesrequa. Again, very enterprising and leading in all the discussions. Now you are a full-time entrepreneur/freelancer working in DL, practising to the fullest and making a living out of it as well. Very, very inspired by both of you :ok_hand:


@VishnuSubramanian, I got your PyTorch course from Packt and have started studying it. Very well written. All the best in your endeavours.

Thanks, I’m glad you found it useful. I would strongly recommend using the fastai material as well, since the book was written for an older version of PyTorch.

Interesting! A lot of custom losses and layers in the solution code, from what I can see.

It seems that a Kaggle silver medal doesn’t mean second place, right? :thinking: