Lesson 7 further discussion ✅

Sorry if this question is a bit off-topic, but I think it is worth asking one more time.

Recently I had quite a rough time trying to tackle the Kaggle dataset from the Quick Draw Doodle competition. I am still not sure whether the problem was related to PyTorch or to Python multiprocessing itself.

How do you usually tackle problems like this? At the end of the day, even with deep learning we still run into memory leaks, overflows, and multiprocessing issues, i.e., the “mundane” stuff of the programming world. How do you organize your workflow? I feel like PyTorch/fastai and the standard Python library alone are not enough to build a scalable and robust machine learning pipeline, though that is probably just my impression.
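To make the question concrete: the best I could come up with myself was trying to localize the leak with stdlib tools before blaming PyTorch or multiprocessing. Here is a toy sketch of what I mean, comparing `tracemalloc` snapshots across iterations (`leaky_step` and `cache` are made-up stand-ins, not my actual competition code):

```python
import tracemalloc

def find_growth(step, n_iters=5):
    """Run `step` repeatedly and return the top allocation sites
    whose memory grew between the first and last snapshot."""
    tracemalloc.start()
    baseline = tracemalloc.take_snapshot()
    for _ in range(n_iters):
        step()
    current = tracemalloc.take_snapshot()
    # StatisticDiff entries, sorted with the biggest growers first
    return current.compare_to(baseline, "lineno")[:3]

# Toy "leak": a module-level list that keeps accumulating data,
# standing in for e.g. a Dataset that caches every batch it loads.
cache = []

def leaky_step():
    cache.append([0] * 10_000)

for stat in find_growth(leaky_step):
    print(stat)
```

This at least tells me *which lines* keep allocating, but it does not cross process boundaries, which is exactly where things got murky for me with DataLoader workers.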

I would be happy to hear any advice from seasoned Kaggle competitors and experienced data scientists!