First, thank you Jeremy and Rachel for this wonderful course. The model of teaching how to do things first and slowly introducing the nitty-gritty details is such a win in a domain like deep learning, where information on the internet is still not easily accessible to everyone. Having been on Kaggle for almost two years, I really like the way the concepts are taught. And yes, spreadsheets rock!
A special thank you to you, Jeremy. I am very pleased to report that I was able to apply the techniques you taught in Lesson 4, and they helped me win the recommendation engine challenge on Analytics Vidhya (https://datahack.analyticsvidhya.com/contest/mlware-2/lb). I was happy to have learnt the concepts well enough to apply them to a new dataset, and yes, it worked out quite well.
To give you some concrete numbers for the recommendation engine problem: a single Keras collaborative filtering (CF) model would have placed me 12th. An ensemble of 5 such models would have placed me 4th (just like the roughly 2% improvement you mentioned in class). A weighted ensemble with LightGBM was good enough for second place, and last-minute teaming up gave us the win. This wouldn't have happened without fast.ai. Thank you. You people are doing a fantastic job.
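In case anyone wants to try the same recipe, here is a minimal sketch of the two ensembling steps. This is not the exact competition code: the predictions are simulated stand-ins (noisy copies of hypothetical true ratings) for what would really come from five trained Keras CF models and a LightGBM model, and the blend weight `w` is hard-coded rather than tuned on a validation set as you would in practice.

```python
import math
import random

random.seed(0)

# Hypothetical stand-ins: "true" ratings plus noisy predictions from
# five CF models (simulating Keras embedding models) and one
# gradient-boosted model (simulating LightGBM).
n = 200
true_ratings = [random.uniform(1, 5) for _ in range(n)]
cf_preds = [[r + random.gauss(0, 0.5) for r in true_ratings] for _ in range(5)]
gbm_pred = [r + random.gauss(0, 0.4) for r in true_ratings]

# Step 1: simple average of the five CF models' predictions.
cf_ensemble = [sum(p[i] for p in cf_preds) / len(cf_preds) for i in range(n)]

# Step 2: weighted blend of the CF ensemble with the GBM predictions.
# The weight w would normally be chosen by minimising validation error.
w = 0.6
blended = [w * c + (1 - w) * g for c, g in zip(cf_ensemble, gbm_pred)]

def rmse(pred, truth):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))
```

The point of step 1 is that averaging several independently trained models cancels some of their individual noise, so the ensemble's RMSE comes out lower than any single model's; step 2 then mixes in a model of a different family, which tends to make complementary errors.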
I'll be happy to share the dataset if anyone is interested in working on it.
Unfortunately, I couldn't finish the exercises before Feb 27th, so I couldn't apply for the Part 2 course. I'm eagerly waiting for the Part 2 MOOC.
Having said that, if there is any way we could contribute to the community, let us know. I owe you one.